Misinformation is one of the biggest challenges facing social media platforms and their users today, with lasting, real-world consequences. TikTok, Gen Z's favorite platform, is no exception.
While it may have started out as a place where young creators danced and launched Lil Nas X's career, it has since evolved into a platform where users of all ages go to be entertained, to learn, and to be informed. But alongside its meteoric growth, the spread of misinformation about topics such as elections and COVID-19, as well as conspiracy theories, has become an increasing problem on the platform.
As calls grow for social media platforms to take a more active and firm role in curbing the spread of misinformation, Facebook, Twitter, and now TikTok have implemented a series of warning labels to do their part.
In a Feb. 3 post, TikTok said videos containing unverified information will receive warning labels that read: "Caution: video flagged for unverified content." The user who posted the content will also receive a message saying a warning label was added to their video.
Users can easily report videos they believe violate TikTok’s community guidelines or promote false information, and moderators also work to keep the platform clear of harmful content. A spokesperson said TikTok works with third-party organizations like PolitiFact, Lead Stories, and SciVerify to help fact-check content primarily focused on elections, vaccines, and climate change.
If other users try to share these videos, they'll receive a pop-up message warning them that the content hasn't been verified. These videos will also not be promoted to users' For You pages.
These changes are a part of TikTok’s attempts to advance media literacy, Jamie Favazza, TikTok’s director of communications for policy and safety, said in an NBC article.
Before the new labels were announced, the short-form video app teamed up with the National Association for Media Literacy Education for its "Be Informed" campaign, producing a series of videos that focus on distinguishing fact from opinion, analyzing graphics, understanding sources, and reflecting on whether content should be shared. The program is Gen Z's version of Millennials' "Just Say No."
These changes are first being rolled out to users in Canada and the U.S. before being released on a global scale.
This comes in the wake of other safety concerns TikTok is facing, mostly surrounding the ages of its large user base. Users must be 13 to join the app, and changes to privacy settings and defaults for users under 18 were put in place in mid-January.
In Italy, TikTok is in the process of re-verifying the age of every user following the death of a 10-year-old girl from Palermo who participated in a "blackout challenge." Any user found to be under the age of 13 will have their account deleted, and TikTok is considering "AI-based systems for age verification purposes."
In response to the tragedy, TikTok also introduced an in-app button that allows users to report accounts that appear to belong to children under 13; those reports are then reviewed by moderators.
But it’s not just Italy that has a large number of teenage (and underage) users. In 2020, the app announced that 60% of its monthly active users in the U.S. were between the ages of 16 and 24.
For a platform with such a large number of young users, TikTok has an even greater responsibility to keep misinformation from reaching impressionable users.
So, is it enough? While the spread of misinformation will never be solved with a single measure (and it will always be a problem), these labels are certainly a step in the right direction. In a test by Irrational Labs, a firm that focuses on behavioral science, shares of TikTok videos carrying the warning label decreased by 24%, and likes on unverified content were down 7%.
The bottom line is that social media platforms must adapt to shifting media landscapes and do all they can to protect their users.