Showing posts with label artificial intelligence. Show all posts

Saturday, March 14, 2026

Brief summary of AI news items this week

"When a robotics chief leaves the fastest-growing AI company in the world, it’s easy to call it “internal drama.” But her reason matters.

OpenAI recently signed a deal with the United States Department of Defense — despite being founded on the principle that powerful AI should benefit humanity and not be weaponized by governments.

For the robotics chief, that principle had quietly disappeared. She chose to leave rather than have her name tied to what comes next.

The concern is bigger than one contract. Military AI rarely stays in the military. Technologies built for war — surveillance systems, autonomous targeting, and behavioral pattern recognition — often move into civilian life within years.

From her perspective, this wasn’t just a disagreement.
It was a refusal to legitimize a direction she believed could reshape society in ways the public never chose. "    Shared from - https://www.instagram.com/p/DVm8K3QiDrm/?


"Anthropic has released a striking report on how AI could reshape the job market.

Jobs at the highest risk: software developers, financial analysts, and customer service roles.

Groups most affected: women, white workers, older employees, and high-income earners.

But there’s an important nuance: the biggest impact may not be mass layoffs, but companies simply hiring fewer people.

The group most affected could be recent college graduates, whose risk is estimated to be 4× higher.

Entry-level hiring has already dropped about 14% since the launch of ChatGPT, particularly in high-risk occupations.

Safest jobs: bartenders, dishwashers, beach lifeguards, and other physical, hands-on work that AI cannot yet automate.

These roles account for roughly 30% of the labor market.

The most concerning part: AI already has the technical capability to automate many tasks, but widespread disruption is slowed by regulation and the gradual pace of corporate adoption. The main barrier is not skills; it’s acceptance and implementation.

The report is based on real data but also includes theoretical modeling, so it should be read with caution. Some manual labor jobs still lack sufficient data for analysis."   Shared from - https://www.instagram.com/p/DVjiNeYjd-g/?
 

Grammarly removes AI Expert Review feature mimicking writers after backlash  

"Grammarly has disabled a controversial AI feature that imitated the style of prominent writers and academics, and is facing a multimillion-dollar lawsuit from those whose identities were used without consent.

The feature, called Expert Review, used generative AI to produce feedback supposedly inspired by writers including the novelist Stephen King, the astrophysicist and author Neil deGrasse Tyson, and the late scientist Carl Sagan.

A class-action lawsuit has been filed in the southern district of New York against Superhuman, Grammarly’s parent company. The lawsuit argues that using a person’s name for commercial gain without permission is unlawful, and that damages due across the plaintiff class exceed $5m (£3.7m).

Since Grammarly’s feature has come to public attention, a number of writers have spoken out about being included."


Saturday, March 7, 2026

"Struggle is not a flaw in thinking. Struggle is the thinking." (video)

"An idea that some may have forgotten. Struggle is not a flaw in thinking.
Struggle is the thinking.

Sitting in confusion. Following the thread. Feeling the discomfort of holding multiple ideas to be true at once. That’s how a person builds real understanding. The kind where you can creatively expand and synthesize ideas because you truly know what you think.

That’s also what education is supposed to do. Not just produce answers, but develop humans who can wrestle with complexity, form formidable conclusions and hold their ground in a messy reality.

In an AI world where answers are instant and the quality of endless content is questionable, this ability becomes rarer, and more valuable. Let's use this lens to evaluate the role of tech in learning."

Bloomberg's full interview with Meredith Whittaker is available on YouTube.




Friday, March 6, 2026

(1) No one knows what the future is. So hedge your bets. (2) Abilities AI can't do (yet)


"This is the first time in history nobody has any idea what the world will look like in 10 years — what the job market will look like, what social relations will look like, et cetera. So hedge your bets. Don’t focus on a narrow subject like coding. 

Give equal importance to your head (intellectual skills), your heart (social skills) and your hands (motor skills). It is in the combination of these three that humans still have a large advantage over A.I."




(2)

"For the past several months, I’ve been wrestling with a question that is becoming unavoidable for all of us: Where does human capability end, and where does artificial intelligence capability begin?​

Some people look at AI with concern, others with excitement, and many with curiosity. ​

Instead of focusing only on what AI is becoming capable of doing, I decided to reflect on something different: which abilities remain deeply, fundamentally human.​

Over the past months, I identified 30 capabilities that, in my judgment, AI still cannot truly perform — and these are precisely the skills where I am intentionally investing more of my own development. ​

Abilities AI can't do (yet)
As a chemical engineer, I organized them as a kind of “Periodic Table of Human Abilities,” grouping them into six dimensions:
• Judgment and Decision-Making ⚖️​
• Influence and Communication 🗣️​
• Emotional Connection ❤️​
• Contextual and Social Awareness 🌍​
• Human Essence and Growth 🌱​
• Adaptability and Creativity 🎨​

The more powerful technology becomes, the clearer one idea feels to me: our future advantage will not come from competing with machines, but from strengthening what makes us human. ​

Do you agree with these capabilities?​

Is there any skill you believe is missing — or any that you think AI may start replicating sooner than we expect? "


Tuesday, December 30, 2025

DESE series for "Public school educators" on "core principles of AI literacy"

"Public school educators are invited to a 6-week virtual series about the core principles of AI literacy.

Based on the DESE Office of EdTech's online module, AI Literacy for Educators, educators will explore how to navigate this technology with curiosity, caution, and a human-centered approach.

Sessions will take place virtually at 3 PM on Tuesdays from January 6 to February 10.
Learn more and sign up online: https://ow.ly/fQVe50XNSAM "



"Registration is reserved for individuals currently employed in Massachusetts Public Schools, Charter Schools, Vocational Technical Schools, and Virtual Schools. Please be advised that DESE does not authorize attendees to record or to use AI transcription tools during the meeting, and DESE does not endorse any unauthorized transcripts created by third parties of its meetings."

Sunday, November 23, 2025

Franklin TV: Synthespians, Part 2 !

You don’t want to become one.

by Pete Fasciano, Executive Director 11/23/2025


We’ve oft heard, ‘God is in the details’.
And heard, ‘The Devil is in the details’.

Both can be true.

This tenet is attributed to art historian Aby Warburg (1866–1929).

Aby spoke to the degree of discipline required to achieve true artistic mastery.

For 6 months Particle 6 studios tweaked details, iterating and refining models to develop Tilly. Yes, God is indeed in the details.

The contentious 2023 Hollywood strike involving actors (SAG-AFTRA), writers (WGA) and directors (DGA) was, among other issues, about protecting the voice, image, and likeness of actors in a future AI-enabled world. A noble cause. We all deserve to be protected from digital AI cloning that could easily be deployed to cast any of us in any manner of compromising statements and scenarios – ‘deep fakes’. In all of cyberspace, every person should enjoy a wholly protected exclusive right to their own voice, image and likeness.

The ELVIS Act (Ensuring Likeness Voice and Image Security) is a Tennessee state law. It protects individuals from the unauthorized commercial use of their identity, particularly with AI-generated content. Governor Bill Lee signed the ELVIS Act into law on March 21, 2024. It updates Tennessee’s 1984 Personal Rights Protection Act, which protected name, photograph, and likeness, but not voice. The ELVIS Act is groundbreaking as the first legislation to specifically address AI’s impact on voice and likeness.

Given the issues around the studio embrace of AI (and future Tillys) there will likely be another contentious Hollywood strike at some point. Given that studio developed AI synthespians and avatars won’t protest, it might well be the last.
The devil is indeed in the details.

And – as always –

Thank you for watching. 
Thank you for listening to wfpr●fm.
And staying informed at Franklin●news.

In case you missed part 1, you can find it here ->

Get this week's program guide for Franklin.TV and Franklin Public Radio (wfpr.fm) online  http://franklin.tv/programguide.pdf 


Watch Listen Read all things that matter in Franklin MA

Sunday, November 9, 2025

Franklin TV: The Synthespians Are Coming !

And Hollywood is – Totally – Freaking – Out.

by Pete Fasciano, Executive Director 11/09/2025

Synthespian? Sounds like a strange other-worldly being from a far away planet. Not so. Synthespians are home-grown – by computers. They’re synthetic thespians: actors created via artificial intelligence systems. You describe the physical traits, characteristics, mannerisms, behaviors and such that define your synthespian’s looks and personality, crunch that data, and digital voilà – out s/he pops – in fulgent, totally ready-to-perform form. Background extras – aka ‘Phillip Space’? Digital extras have roamed the background for years. Now they’re ready for their close-up.

Tilly Norwood
Such is the creation of Tilly Norwood, a synthetic thespian entity – a ‘digital actor’ created by Eline Van der Velden at Particle 6. Tilly made her debut in late September at the Zurich Film Festival. Reactions?

If you’re talent, or hold any number of other production craft positions from the director of photography to the set painters and production assistants – Tilly Norwood represents Artificial Armageddon. Hollywood’s hyper-reactive fury is raging like a California wildfire, anticipating a scorched-Earth future for the industry.

Many Hollywood creatives have their heads in the sand on this. They wail and moan about genuine human emoting, character development, the humanity that visionary directors and ‘A’ list actors bring to the screen – and so on. However, if you’re a movie studio, what’s not to like? Tilly might well be a studio’s dream ‘talent’. She is infinitely accommodating and always at the ready, absent any off-stage drama, histrionics or guile ( – unless the script calls for that). She is as pure as the driven snow ( – if the script calls for that).

My point? We have seen this movie before. Literally – in 1995. Pixar’s ‘Toy Story’ was the first CGI feature-length animation – 30 years ago. Then as now, the backbone of every Pixar and Disney animation since Toy Story has been precisely about that – Story. The technology wasn’t the star. The story was, and every story well-told is dripping with humanity, pathos, soul, wit, trauma, redemption, and on.

So, where is all this going? Extract Woody, Buzz Lightyear, Mickey, Minnie, et al. Insert Tilly. The modern animation studio has 3 successful decades of refining story, preproduction and so on. The CGI production pipeline is mature and ready. Working with Tilly and other (studio created and owned) digital entities who will surely follow will be just another day at the office. In a mere handful of years the difference between CGI animations and blockbuster features will be –

They call it Tilly Norwood
Another key point: There are already several AI services that will gladly ‘rent’ their ‘Tilly equivalents’ as directable avatars. Because they have been trained by movies to move and speak like humans do in movies, they have ‘life experience’ and can ‘act’. They say what you want ‘em to say and move how you want ‘em to move. Like the fella sed – they can walk and chew gum at the same time ( – if the script calls for that). They are digital virtual spokespeople – able spokes-folks for hire.

Are we at Franklin.TV interested? Yes. Exploring? Yes. Working with? Not quite yet, but perhaps sooner than later.

And – as always –
Thank you for watching 
Thank you for listening to wfpr●fm
And staying informed at Franklin●news

 

Get this week's program guide for Franklin.TV and Franklin Public Radio (wfpr.fm) online  http://franklin.tv/programguide.pdf 


Watch Listen Read all things that matter in Franklin MA

Tuesday, November 4, 2025

How to Get Ahead with AI? (video)

Special seminar on Artificial Intelligence, featuring Mr. Vishal Tiruveedi, a proud alumnus of Franklin High School; Mr. K.P. Sompally, a distinguished School Committee Member; and David Callaghan, Chairman of the School Committee.

They share valuable insights on the growing impact of AI in education. Together, they explore how AI is transforming our world, inspiring the next generation of innovators, and shaping its role in our future.





Sunday, October 19, 2025

Seminar on why AI is so important - Oct 20 at Franklin TV Studio 6 PM

Seminar on why AI is so important - October 20, 2025 at the Franklin TV Studio at 6 PM. This in-person seminar will be recorded and available for broadcast later.



 

Tuesday, October 22, 2024

Tell Linkedin: Consent Matters


Last month, LinkedIn announced it would begin collecting user data to train its AI systems. 

You may have missed this news, so we wanted to alert you and share how you can opt-out of having your data used to train LinkedIn's generative AI systems. We believe in consent culture which means that corporations must clearly explain how they use your personal data and get your permission before using it in new ways. 

My years of experience in computer science have revealed time and again that responsible AI requires meaningful transparency and informed consent. 

Join all of us at the Algorithmic Justice League and send a powerful message to LinkedIn about consent by opting out of their AI data collection program today.

You should also know that LinkedIn is not treating all users equally. European Union users are protected from automatic enrollment thanks to their data protection regulations. Clearly, LinkedIn is taking advantage of legal loopholes in the United States. 

Just because platforms like LinkedIn don't charge a fee doesn't mean users like you and me automatically consent to our data being used for AI training — right? Follow the steps below (or follow this link) to opt out of having your personal data — your words and your images — used as training for LinkedIn's corporate AI systems.

  1. Go to LinkedIn, then navigate to "Settings."
  2. Choose "Data Privacy."
  3. Locate "Data for Generative AI Improvement."
  4. Toggle the button to "Off" to opt out.

Fighting for algorithmic justice takes all of us; please spread the word.

In solidarity, 

Dr. Joy Buolamwini

President
Algorithmic Justice League

ABOUT THE ALGORITHMIC JUSTICE LEAGUE (AJL)

 

The Algorithmic Justice League is an organization that combines art and advocacy to illuminate the social implications and harms of artificial intelligence (AI). We are reducing AI harms in society and increasing accountability in the use of AI systems.

© Copyright Algorithmic Justice League 2023
All rights reserved

Tuesday, September 17, 2024

Voices of Franklin: KP Sompally offers insights on the use of medical alert devices

Empowering Safety and Well-being Through Advanced Technology

In an increasingly unpredictable world, safety and health are top concerns for vulnerable populations, particularly seniors and school personnel. The need for reliable emergency response systems, such as medical alert devices for seniors and panic buttons for educators, has never been more critical. As these systems evolve, integrating cutting-edge Artificial Intelligence technologies, they become indispensable tools for ensuring swift and effective emergency responses.

Protecting Seniors: The Role of AI in Medical Alert Systems

Medical alert systems are life-saving devices that provide immediate access to emergency services in case of falls, medical emergencies, or other crises. For seniors, especially those living alone, these systems are crucial in safeguarding their health and well-being. Modern medical alert systems now utilize AI technologies to enhance their effectiveness, going beyond basic functions.

AI-powered medical alert devices can monitor daily activities, detect anomalies in behavior, and predict potential health issues before they become emergencies. For instance, some devices can analyze gait patterns to identify the risk of falls, providing preventive alerts. Additionally, AI-driven voice recognition and natural language processing allow seniors to communicate their needs without having to press a button, making help more accessible even in cases where mobility is impaired.

These advancements not only improve response times but also empower seniors to live independently for longer, with the peace of mind that help is always within reach.

Enhancing School Safety: AI-Enabled Panic Buttons for School Personnel

Safety in schools has become a paramount concern for educators, students, and parents alike. Panic buttons provide immediate access to emergency services during critical situations, such as security threats or medical emergencies. Regardless of whether these panic buttons are used regularly, having them in place can save lives.

AI technology is revolutionizing panic button systems in schools by offering features such as real-time location tracking, intelligent threat assessment, and automated alerts to local authorities. AI can quickly assess the severity of a situation and prioritize responses, ensuring that the right resources are dispatched promptly. For instance, in cases of active threats, AI systems can analyze data from various sources—such as security cameras, social media, and communication channels—to provide real-time insights and facilitate faster decision-making by authorities.

Even when these systems are not in frequent use, their presence acts as a deterrent and provides a safety net that reassures school personnel and students alike.

A Commitment to Safety

Medical Alert Systems

As our society becomes more technologically advanced, the integration of AI in medical alert systems for seniors and panic buttons for school personnel is a natural progression towards ensuring the safety and well-being of vulnerable populations. These technologies offer the promise of faster responses, predictive capabilities, and enhanced communication during emergencies, ultimately saving lives and providing peace of mind.

It is imperative that we continue to invest in and support the development of AI-driven safety systems to protect those who need it most, whether they are seniors living independently or educators shaping the future in our schools.

For various Medical Alert Systems you can check this secure website ->   https://www.topmedalerts.com

KP Sompally
Franklin

Monday, April 15, 2024

Boston Globe: "Spotting a deepfake: Eight tips and tells"

"Deceptive deepfakes seem to be everywhere these days, making it harder than ever to sort the true from the false. While there’s no silver bullet to address the threat posed by generative AI, here are a few techniques to guard against disinformation.

1. Take your time, look closely
As humans, we are hardwired to focus on the face. But while many of today’s AI-image generators can create lifelike faces, it pays to spend a little time looking at other aspects of an image. AI is apt to cut corners and that’s where things can get weird. Look at the background. Does it make real-world sense? Does everything line up? How about people other than the image’s primary subject? Is there a phantom limb? Maybe a sixth finger?"
Continue reading the tips on how to detect deepfakes! (subscription may be required)  https://www.bostonglobe.com/2024/04/11/arts/how-to-spot-deepfake-tips-ai/

Visitors can watch videos and guess if the images are real or fake. The MIT Museum's exhibit "AI: Mind the Gap" looks at deepfake video technology. LANE TURNER/GLOBE STAFF



Tuesday, July 11, 2023

What does AI get trained on? Copyrighted material, apparently without permission of the owner

Aside from the fact that AI is neither artificial nor "intelligent", ChatGPT was trained on information current only through 2019 (four years ago, and getting older each day) and, as this lawsuit claims, on copyrighted material that was not permissioned for such use.
"Tools like ChatGPT, a highly popular chatbot, are based on large language models that are fed vast amounts of data taken from the internet in order to train them to give convincing responses to text prompts from users.

The lawsuit against OpenAI claims the three authors “did not consent to the use of their copyrighted books as training material for ChatGPT. Nonetheless, their copyrighted materials were ingested and used to train ChatGPT.” The lawsuit concerning Meta claims that “many” of the authors’ copyrighted books appear in the dataset that the Facebook and Instagram owner used to train LLaMA, a group of Meta-owned AI models.

The suits claim the authors’ works were obtained from “shadow library” sites that have “long been of interest to the AI-training community”.
Continue reading the article online ->
It is claimed that Sarah Silverman and the other authors’ works were obtained from ‘shadow library’ sites. Photograph: Rich Fury/Getty Images for THR