Showing posts with label artificial intelligence. Show all posts

Sunday, November 9, 2025

Franklin TV: The Synthespians Are Coming!

And Hollywood is – Totally – Freaking – Out.

by Pete Fasciano, Executive Director 11/09/2025

Synthespian? Sounds like a strange other-worldly being from a faraway planet. Not so. Synthespians are home-grown – by computers. They’re synthetic thespians: actors created via artificial intelligence systems. You describe the physical traits, characteristics, mannerisms, behaviors and such that define your synthespian’s looks and personality, crunch that data, and digital voila – out s/he pops – in fulgent, totally ready-to-perform form. Background extras – aka ‘Phillip Space’? Digital extras have roamed the background for years. Now they’re ready for their close-up.

Tilly Norwood
Such is the creation of Tilly Norwood: a synthetic thespian entity – a ‘digital actor’ created by Eline Van der Velden at Particle 6. Tilly made her debut in late September at the Zurich Film Festival. Reactions?

If you’re talent, or in any number of other production craft positions from the director of photography to the set painters and production assistants – Tilly Norwood represents Artificial Armageddon. Hollywood’s hyper-reactive fury is raging like a California wildfire, anticipating a scorched-Earth future for the industry.

Many Hollywood creatives have their heads in the sand on this. They wail and moan about genuine human emoting, character development, the humanity that visionary directors and ‘A’ list actors bring to the screen – and so on. However, if you’re a movie studio, what’s not to like? Tilly might well be a studio’s dream ‘talent’. She is infinitely accommodating and always at the ready, absent any off-stage drama, histrionics or guile ( – unless the script calls for that). She is as pure as the driven snow ( – if the script calls for that).

My point? We have seen this movie before. Literally – in 1995. Pixar’s ‘Toy Story’ was the first CGI feature-length animation – 30 years ago. Then as now, the backbone of every Pixar and Disney animation since Toy Story has been precisely about that – Story. The technology wasn’t the star. The story was, and every story well-told is dripping with humanity, pathos, soul, wit, trauma, redemption, and on.

So, where is all this going? Extract Woody, Buzz Lightyear, Mickey, Minnie, et al. Insert Tilly. The modern animation studio has 3 successful decades of refining story, preproduction and so on. The CGI production pipeline is mature and ready. Working with Tilly and other (studio created and owned) digital entities who will surely follow will be just another data at the office. In a mere handful of years the difference between CGI animations and blockbuster features will be –

They call it Tilly Norwood
Another key point: There are already several AI services that will gladly ‘rent’ their ‘Tilly equivalents’ as directable avatars. Because they have been trained by movies to move and speak like humans do in movies, they have ‘life experience’ and can ‘act’. They say what you want ‘em to say and move how you want ‘em to move. Like the fella sed – they can walk and chew gum at the same time ( – if the script calls for that). They are digital virtual spokespeople – able spokes-folks for hire.

Are we at Franklin.TV interested? Yes. Exploring? Yes. Working with? Not quite yet, but perhaps sooner than later.

And – as always –
Thank you for watching 
Thank you for listening to wfpr●fm
And staying informed at Franklin●news

 

Get this week's program guide for Franklin.TV and Franklin Public Radio (wfpr.fm) online: http://franklin.tv/programguide.pdf


Watch Listen Read all things that matter in Franklin MA

Tuesday, November 4, 2025

How to Get Ahead with AI? (video)

Special seminar on Artificial Intelligence, featuring Mr. Vishal Tiruveedi, a proud alumnus of Franklin High School; Mr. K.P. Sompally, a distinguished School Committee member; and David Callaghan, Chairman of the School Committee.

They share valuable insights on the growing impact of AI in education. Together, they explore how AI is transforming our world, inspiring the next generation of innovators, and shaping our future.





Sunday, October 19, 2025

Seminar on why AI is so important - Oct 20 at Franklin TV Studio 6 PM

Seminar on why AI is so important - October 20, 2025 at the Franklin TV Studio at 6 PM. This in-person seminar will be recorded and available for broadcast later.



 

Tuesday, October 22, 2024

Tell LinkedIn: Consent Matters


Last month, LinkedIn announced it would begin collecting user data to train its AI systems.

You may have missed this news, so we wanted to alert you and share how you can opt-out of having your data used to train LinkedIn's generative AI systems. We believe in consent culture which means that corporations must clearly explain how they use your personal data and get your permission before using it in new ways. 

My years of experience in computer science have revealed time and again that responsible AI requires meaningful transparency and informed consent. 

Join all of us at the Algorithmic Justice League and send a powerful message to LinkedIn about consent by opting out of their AI data collection program today.

You should also know that LinkedIn is not treating all users equally. European Union users are protected from automatic enrollment thanks to their data protection regulations. Clearly, LinkedIn is taking advantage of legal loopholes in the United States. 

Just because platforms like LinkedIn don't charge a fee doesn't mean users like you and me automatically consent to our data being used for AI training — right? Follow the steps below (or follow this link) to opt out of having your personal data — your words and your images — used as training for LinkedIn's corporate AI systems.

  1. Go to LinkedIn, then navigate to "Settings."
  2. Choose "Data Privacy."
  3. Locate "Data for Generative AI Improvement."
  4. Toggle the button to "Off" to opt out.

Fighting for algorithmic justice takes all of us; please spread the word.

In solidarity, 

Dr. Joy Buolamwini

President
Algorithmic Justice League

ABOUT THE ALGORITHMIC JUSTICE LEAGUE (AJL)

 

The Algorithmic Justice League is an organization that combines art and advocacy to illuminate the social implications and harms of artificial intelligence (AI). We are reducing AI harms in society and increasing accountability in the use of AI systems.

© Copyright Algorithmic Justice League 2023
All rights reserved

Tuesday, September 17, 2024

Voices of Franklin: KP Sompally offers insights on the use of medical alert devices

Empowering Safety and Well-being Through Advanced Technology

In an increasingly unpredictable world, safety and health are top concerns for vulnerable populations, particularly seniors and school personnel. The need for reliable emergency response systems, such as medical alert devices for seniors and panic buttons for educators, has never been more critical. As these systems evolve, integrating cutting-edge Artificial Intelligence technologies, they become indispensable tools for ensuring swift and effective emergency responses.

Protecting Seniors: The Role of AI in Medical Alert Systems

Medical alert systems are life-saving devices that provide immediate access to emergency services in case of falls, medical emergencies, or other crises. For seniors, especially those living alone, these systems are crucial in safeguarding their health and well-being. Modern medical alert systems now utilize AI technologies to enhance their effectiveness, going beyond basic functions.

AI-powered medical alert devices can monitor daily activities, detect anomalies in behavior, and predict potential health issues before they become emergencies. For instance, some devices can analyze gait patterns to identify the risk of falls, providing preventive alerts. Additionally, AI-driven voice recognition and natural language processing allow seniors to communicate their needs without having to press a button, making help more accessible even in cases where mobility is impaired.

These advancements not only improve response times but also empower seniors to live independently for longer, with the peace of mind that help is always within reach.

Enhancing School Safety: AI-Enabled Panic Buttons for School Personnel

Safety in schools has become a paramount concern for educators, students, and parents alike. Panic buttons provide immediate access to emergency services during critical situations, such as security threats or medical emergencies. Regardless of whether these panic buttons are used regularly, having them in place can save lives.

AI technology is revolutionizing panic button systems in schools by offering features such as real-time location tracking, intelligent threat assessment, and automated alerts to local authorities. AI can quickly assess the severity of a situation and prioritize responses, ensuring that the right resources are dispatched promptly. For instance, in cases of active threats, AI systems can analyze data from various sources—such as security cameras, social media, and communication channels—to provide real-time insights and facilitate faster decision-making by authorities.

Even when these systems are not in frequent use, their presence acts as a deterrent and provides a safety net that reassures school personnel and students alike.

A Commitment to Safety

Medical Alert Systems

As our society becomes more technologically advanced, the integration of AI in medical alert systems for seniors and panic buttons for school personnel is a natural progression towards ensuring the safety and well-being of vulnerable populations. These technologies offer the promise of faster responses, predictive capabilities, and enhanced communication during emergencies, ultimately saving lives and providing peace of mind.

It is imperative that we continue to invest in and support the development of AI-driven safety systems to protect those who need it most, whether they are seniors living independently or educators shaping the future in our schools.

For various medical alert systems, you can check this website: https://www.topmedalerts.com

KP Sompally
Franklin

Monday, April 15, 2024

Boston Globe: "Spotting a deepfake: Eight tips and tells"

"Deceptive deepfakes seem to be everywhere these days, making it harder than ever to sort the true from the false. While there’s no silver bullet to address the threat posed by generative AI, here are a few techniques to guard against disinformation.

1. Take your time, look closely
As humans, we are hardwired to focus on the face. But while many of today’s AI-image generators can create lifelike faces, it pays to spend a little time looking at other aspects of an image. AI is apt to cut corners and that’s where things can get weird. Look at the background. Does it make real-world sense? Does everything line up? How about people other than the image’s primary subject? Is there a phantom limb? Maybe a sixth finger?"
Continue reading the tips on how to detect deepfakes! (subscription may be required)  https://www.bostonglobe.com/2024/04/11/arts/how-to-spot-deepfake-tips-ai/

Visitors can watch videos and guess if the images are real or fake. The MIT Museum's exhibit "AI: Mind the Gap" looks at deepfake video technology. LANE TURNER/GLOBE STAFF



Tuesday, July 11, 2023

What does AI get trained on? Copyrighted material, apparently without permission of the owner

Aside from the fact that AI is neither artificial nor "intelligent", ChatGPT was trained on information only as recent as 2019 (four years ago, and getting older each day). This lawsuit also claims the training data included copyrighted material that was not permissioned for such use.
"Tools like ChatGPT, a highly popular chatbot, are based on large language models that are fed vast amounts of data taken from the internet in order to train them to give convincing responses to text prompts from users.

The lawsuit against OpenAI claims the three authors “did not consent to the use of their copyrighted books as training material for ChatGPT. Nonetheless, their copyrighted materials were ingested and used to train ChatGPT.” The lawsuit concerning Meta claims that “many” of the authors’ copyrighted books appear in the dataset that the Facebook and Instagram owner used to train LLaMA, a group of Meta-owned AI models.

The suits claim the authors’ works were obtained from “shadow library” sites that have “long been of interest to the AI-training community”.
Continue reading the article online ->
It is claimed that Sarah Silverman and the other authors’ works were obtained from ‘shadow library’ sites. Photograph: Rich Fury/Getty Images for THR


Wednesday, May 10, 2023

News Literacy Project: "News literacy in the age of AI"

Via the News Literacy Project:  
"Chatbots like ChatGPT that are built on generative artificial intelligence technologies — a set of algorithms that can “generate” content based on a large dataset — have captured the world’s imagination. Reactions to this great leap forward have ranged from enthusiastic to alarmed.

This technology is evolving rapidly and to keep up we must understand its powers and perils. Generative AI can help us automate mundane tasks or supercharge our online searches, but it could also be weaponized to create and spread disinformation at a dizzying pace.

AI will impact the digital landscape in ways we have yet to imagine. But we do know that news literacy skills and knowledge — like checking your emotions before you share content, consulting multiple sources or doing a quick reverse image search — will be more vital than ever."
The News Literacy Project has compiled a set of resources to define AI and to help determine how to identify it. https://newslit.org/ai/


Wednesday, March 22, 2023

Alert: Realistic-looking but fake AI images make news literacy skills more urgent



Hello Franklin,

Fake images showing the supposed arrest of former President Donald Trump are circulating on social media, but they're generated using artificial intelligence and are not authentic. Share this RumorGuard entry now and let your friends and family know about this new kind of convincing (and often misleading) technology.

 

These images of Trump being arrested are fakes generated by AI

After former President Donald Trump announced that he expected to get arrested for charges related to a hush money payment to adult film star Stormy Daniels, a flurry of images circulated on social media. These were generated using artificial intelligence and depicted Trump being taken into custody. Let's take a closer look at these AI-generated images.


RumorGuard, a resource from the News Literacy Project, helps you stay on top of viral misinformation that is trending online with clear, concise explanations of credible fact-checks. It also provides insights and resources designed to help you take control of your social media feeds and help others avoid being exploited by falsehoods.

We want to hear from you! Provide your feedback about RumorGuard here or send us a recent rumor you think we should cover at rumorguard@newslit.org.

Support news literacy by donating today.


Thursday, March 16, 2023

New AI development "still makes many of the errors of previous versions"

"The artificial intelligence research lab OpenAI on Tuesday launched the newest version of its language software, GPT-4, an advanced tool for analyzing images and mimicking human speech, pushing the technical and ethical boundaries of a rapidly proliferating wave of AI.

OpenAI’s earlier product, ChatGPT, captivated and unsettled the public with its uncanny ability to generate elegant writing, unleashing a viral wave of college essays, screenplays and conversations — though it relied on an older generation of technology that hasn’t been cutting-edge for more than a year.

...

In its blog post, OpenAI said GPT-4 still makes many of the errors of previous versions, including “hallucinating” nonsense, perpetuating social biases and offering bad advice. It also lacks knowledge of events that happened after about September 2021, when its training data was finalized, and “does not learn from its experience,” limiting people’s ability to teach it new things."
Continue reading the article online (subscription may be required)



Wednesday, November 3, 2021

New York Times: "Facebook, Citing Societal Concerns, Plans to Shut Down Facial Recognition System"

"Facebook plans to shut down its decade-old facial recognition system this month, deleting the face scan data of more than one billion users and effectively eliminating a feature that has fueled privacy concerns, government investigations, a class-action lawsuit and regulatory woes.

Jerome Pesenti, vice president of artificial intelligence at Meta, Facebook’s newly named parent company, said in a blog post on Tuesday that the social network was making the change because of “many concerns about the place of facial recognition technology in society.” He added that the company still saw the software as a powerful tool, but “every new technology brings with it potential for both benefit and concern, and we want to find the right balance.”

The decision shutters a feature that was introduced in December 2010 so that Facebook users could save time. The facial-recognition software automatically identified people who appeared in users’ digital photo albums and suggested users “tag” them all with a click, linking their accounts to the images. Facebook now has built one of the largest repositories of digital photos in the world, partly thanks to this software."
Continue reading the article online. (Subscription may be required)
https://www.nytimes.com/2021/11/02/technology/facebook-facial-recognition.html

Facebook is shuttering a feature, introduced in December 2010, that automatically identified people who appeared in users’ digital photo albums. Credit...Carlos Barria/Reuters


Tuesday, March 16, 2021

National News: CDC review of documents; AI and the bias issue

"Federal health officials have identified several controversial pandemic recommendations released during the Donald Trump administration that they say were “not primarily authored” by staff and don’t reflect the best scientific evidence, based on a review ordered by its new director.

The review identified three documents that had already been removed from the agency’s website: One, released in July, delivered a strong argument for school reopenings and downplayed health risks. A second set of guidelines about the country’s reopening was released in April by the White House and was far less detailed than what had been drafted by the CDC and the Federal Emergency Management Agency. A third guidance issued in August discouraged the testing of people without covid-19 symptoms even when they had contact with infected individuals. That was replaced in September after experts inside and outside the agency raised alarms."
Continue reading the article online (subscription may be required)


"Hundreds of people gathered for the first lecture at what had become the world’s most important conference on artificial intelligence — row after row of faces. Some were East Asian, a few were Indian, and a few were women. But the vast majority were white men. More than 5,500 people attended the meeting, five years ago in Barcelona, Spain.

Timnit Gebru, then a graduate student at Stanford University, remembers counting only six Black people other than herself, all of whom she knew, all of whom were men.

The homogeneous crowd crystallized for her a glaring issue. The big thinkers of tech say A.I. is the future. It will underpin everything from search engines and email to the software that drives our cars, directs the policing of our streets and helps create our vaccines."
Continue reading the article online (subscription may be required)