A BBC research team specializing in tracking scientific misinformation has found that YouTube channels using artificial intelligence to create videos may be spreading false scientific information, and that these videos are being presented to children as “educational content.”
The research team identified more than 50 channels in more than 20 languages that spread misinformation disguised as science content (mathematics, engineering, science and technology).
These channels traffic in pseudoscience, disinformation, and conspiracy theories, spreading claims such as that the pyramids of Egypt were used to power buildings, that climate change is not caused by humans, and that aliens exist.
Our analysis shows that YouTube recommends these “bad science” videos to children alongside verified educational content.
Clicks for money
Kyle Hill is a science educator on YouTube with a large young audience. A few months ago he started noticing these clips appearing in his viewing suggestions. He says followers of his channel have contacted him about recommended content that looked legitimate but was full of misinformation. The channels, he says, appear to be stealing and manipulating the very content and ideas promoted by legitimate creators on the site.
The videos are built around wild claims, using sensational narration, catchy titles, and dramatic imagery to attract viewers.
More viewers mean more revenue from the advertising shown alongside the videos, of which YouTube keeps about 45 percent.
Creators also tag their videos as “educational content,” which means they’re more likely to be recommended for kids.
“Because I’m a science guy, I took this personally,” Hill says. “These channels seem to have hit on the right thing to maximize views with minimal effort.”
Check for fake clips
We found dozens of YouTube channels producing this type of material in languages including Arabic, Russian, Spanish, and Thai. Many of these channels have millions of subscribers. Their videos often reach millions of views.
Channel creators publish content rapidly, with many posting multiple videos each day. Given this pace, the BBC research team suspects they are using AI software to generate them.
Programs such as ChatGPT and Midjourney create new content on request: a user might ask for an image of a “black cat wearing a crown,” for example, and get exactly that image without wasting time creating it themselves.
To verify this, we sampled videos from each channel and used AI-detection tools and expert analysis to assess the likelihood that the visuals, narration, and text had been generated using AI.
Our analysis shows that most of the videos use AI to generate text and images and to extract and repurpose material from legitimate science videos. The result is content that looks credible but is often false.