Showing posts with label artificial intelligence. Show all posts

Monday, April 15, 2024

Boston Globe: "Spotting a deepfake: Eight tips and tells"

"Deceptive deepfakes seem to be everywhere these days, making it harder than ever to sort the true from the false. While there’s no silver bullet to address the threat posed by generative AI, here are a few techniques to guard against disinformation.

1. Take your time, look closely
As humans, we are hardwired to focus on the face. But while many of today’s AI-image generators can create lifelike faces, it pays to spend a little time looking at other aspects of an image. AI is apt to cut corners and that’s where things can get weird. Look at the background. Does it make real-world sense? Does everything line up? How about people other than the image’s primary subject? Is there a phantom limb? Maybe a sixth finger?"
Continue reading the tips on how to detect deepfakes (subscription may be required): https://www.bostonglobe.com/2024/04/11/arts/how-to-spot-deepfake-tips-ai/

Visitors can watch videos and guess if the images are real or fake. The MIT Museum's exhibit "AI: Mind the Gap" looks at deepfake video technology. LANE TURNER/GLOBE STAFF



Tuesday, July 11, 2023

What does AI get trained on? Copyrighted material, apparently without permission of the owner

Aside from the fact that AI is neither artificial nor "intelligent," ChatGPT was trained on information current only through 2019 (four years ago, and growing staler each day). Now, as this lawsuit claims, its training data also included copyrighted material used without the owners' permission.
"Tools like ChatGPT, a highly popular chatbot, are based on large language models that are fed vast amounts of data taken from the internet in order to train them to give convincing responses to text prompts from users.

The lawsuit against OpenAI claims the three authors “did not consent to the use of their copyrighted books as training material for ChatGPT. Nonetheless, their copyrighted materials were ingested and used to train ChatGPT.” The lawsuit concerning Meta claims that “many” of the authors’ copyrighted books appear in the dataset that the Facebook and Instagram owner used to train LLaMA, a group of Meta-owned AI models.

The suits claim the authors’ works were obtained from “shadow library” sites that have “long been of interest to the AI-training community”.
Continue reading the article online ->
It is claimed that Sarah Silverman and the other authors’ works were obtained from ‘shadow library’ sites. Photograph: Rich Fury/Getty Images for THR


Wednesday, May 10, 2023

News Literacy Project: "News literacy in the age of AI"

Via the News Literacy Project:  
"Chatbots like ChatGPT that are built on generative artificial intelligence technologies — a set of algorithms that can “generate” content based on a large dataset — have captured the world’s imagination. Reactions to this great leap forward have ranged from enthusiastic to alarmed.

This technology is evolving rapidly and to keep up we must understand its powers and perils. Generative AI can help us automate mundane tasks or supercharge our online searches, but it could also be weaponized to create and spread disinformation at a dizzying pace.

AI will impact the digital landscape in ways we have yet to imagine. But we do know that news literacy skills and knowledge — like checking your emotions before you share content, consulting multiple sources or doing a quick reverse image search — will be more vital than ever."
The News Literacy Project has compiled a set of resources that define AI and help you identify AI-generated content. https://newslit.org/ai/
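The "quick reverse image search" that news-literacy guides recommend rests on perceptual hashing: reducing an image to a small fingerprint so near-duplicate copies can be matched even after resizing or recompression. Here is a minimal, illustrative sketch of the "average hash" idea in plain Python, using toy 2x2 grayscale grids in place of real downscaled photos (the function names are ours, not any particular search engine's API):

```python
# Illustrative "average hash": reduce an image (here a 2D list of 0-255
# grayscale values, standing in for a photo downscaled to a tiny grid)
# to a bit string, then compare fingerprints by Hamming distance.

def average_hash(pixels):
    """Return a bit string: 1 where a pixel is above the image mean, else 0."""
    flat = [v for row in pixels for v in row]
    mean = sum(flat) / len(flat)
    return "".join("1" if v > mean else "0" for v in flat)

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests a near-duplicate image."""
    return sum(a != b for a, b in zip(h1, h2))

original = [[10, 200], [30, 220]]
slightly_edited = [[12, 198], [28, 225]]  # minor pixel noise, same structure
different = [[200, 10], [220, 30]]        # structurally different image

h0 = average_hash(original)
print(hamming_distance(h0, average_hash(slightly_edited)))  # 0: likely the same image
print(hamming_distance(h0, average_hash(different)))        # 4: clearly different
```

Real reverse-image-search systems use far more robust fingerprints, but the principle is the same: a recirculated or lightly edited fake will often match an earlier, already-debunked copy.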


Wednesday, March 22, 2023

Alert: Realistic-looking but fake AI images make news literacy skills more urgent


Fake images showing the supposed arrest of former President Donald Trump are circulating on social media, but they're generated using artificial intelligence and are not authentic.


These images of Trump being arrested are fakes generated by AI

After former President Donald Trump announced that he expected to get arrested for charges related to a hush money payment to adult film star Stormy Daniels, a flurry of images circulated on social media. These were generated using artificial intelligence and depicted Trump being taken into custody. Let's take a closer look at these AI-generated images.


RumorGuard, a resource from the News Literacy Project, helps you stay on top of viral misinformation that is trending online with clear, concise explanations of credible fact-checks. It also provides insights and resources designed to help you take control of your social media feeds and help others avoid being exploited by falsehoods.



Thursday, March 16, 2023

New AI development "still makes many of the errors of previous versions"

"The artificial intelligence research lab OpenAI on Tuesday launched the newest version of its language software, GPT-4, an advanced tool for analyzing images and mimicking human speech, pushing the technical and ethical boundaries of a rapidly proliferating wave of AI.

OpenAI’s earlier product, ChatGPT, captivated and unsettled the public with its uncanny ability to generate elegant writing, unleashing a viral wave of college essays, screenplays and conversations — though it relied on an older generation of technology that hasn’t been cutting-edge for more than a year.

...

In its blog post, OpenAI said GPT-4 still makes many of the errors of previous versions, including “hallucinating” nonsense, perpetuating social biases and offering bad advice. It also lacks knowledge of events that happened after about September 2021, when its training data was finalized, and “does not learn from its experience,” limiting people’s ability to teach it new things."
Continue reading the article online (subscription may be required)



Wednesday, November 3, 2021

New York Times: "Facebook, Citing Societal Concerns, Plans to Shut Down Facial Recognition System"

"Facebook plans to shut down its decade-old facial recognition system this month, deleting the face scan data of more than one billion users and effectively eliminating a feature that has fueled privacy concerns, government investigations, a class-action lawsuit and regulatory woes.

Jerome Pesenti, vice president of artificial intelligence at Meta, Facebook’s newly named parent company, said in a blog post on Tuesday that the social network was making the change because of “many concerns about the place of facial recognition technology in society.” He added that the company still saw the software as a powerful tool, but “every new technology brings with it potential for both benefit and concern, and we want to find the right balance.”

The decision shutters a feature that was introduced in December 2010 so that Facebook users could save time. The facial-recognition software automatically identified people who appeared in users’ digital photo albums and suggested users “tag” them all with a click, linking their accounts to the images. Facebook now has built one of the largest repositories of digital photos in the world, partly thanks to this software."
Continue reading the article online. (Subscription may be required)
https://www.nytimes.com/2021/11/02/technology/facebook-facial-recognition.html

Facebook is shuttering a feature, introduced in December 2010, that automatically identified people who appeared in users’ digital photo albums. Credit...Carlos Barria/Reuters


Tuesday, March 16, 2021

National News: CDC review of documents; AI and the bias issue

"Federal health officials have identified several controversial pandemic recommendations released during the Donald Trump administration that they say were “not primarily authored” by staff and don’t reflect the best scientific evidence, based on a review ordered by its new director.

The review identified three documents that had already been removed from the agency’s website: One, released in July, delivered a strong argument for school reopenings and downplayed health risks. A second set of guidelines about the country’s reopening was released in April by the White House and was far less detailed than what had been drafted by the CDC and the Federal Emergency Management Agency. A third guidance issued in August discouraged the testing of people without covid-19 symptoms even when they had contact with infected individuals. That was replaced in September after experts inside and outside the agency raised alarms."
Continue reading the article online (subscription may be required)


"Hundreds of people gathered for the first lecture at what had become the world’s most important conference on artificial intelligence — row after row of faces. Some were East Asian, a few were Indian, and a few were women. But the vast majority were white men. More than 5,500 people attended the meeting, five years ago in Barcelona, Spain.

Timnit Gebru, then a graduate student at Stanford University, remembers counting only six Black people other than herself, all of whom she knew, all of whom were men.

The homogeneous crowd crystallized for her a glaring issue. The big thinkers of tech say A.I. is the future. It will underpin everything from search engines and email to the software that drives our cars, directs the policing of our streets and helps create our vaccines."
Continue reading the article online (subscription may be required)