Fake news, algorithms designed to manipulate content, social media bots influencing opinions – we have heard about all of these as more information has been revealed following the January 6th insurrection at the U.S. Capitol. One technology newly added to the list is artificial intelligence (AI), and because of this, some organizations are hesitant to move forward with any AI-based projects. Fortune brought this interesting news to our attention in their article, "Why companies are thinking twice about using artificial intelligence."
The same machine learning technology that helps companies target people with online ads on social media apparently also helps people with nefarious intentions distribute propaganda and misinformation.
AI has been championed by many companies for its ability to predict sales, interpret legal documents and power more realistic customer chatbots. Now, projects that use machine learning to analyze customer data and predict user behavior are raising concerns about data privacy and the protection of personal information.
Melody K. Smith
Sponsored by Data Harmony, a unit of Access Innovations, the world leader in indexing and making content findable.