An international team of academic volunteers is challenging big tech’s hold on natural language processing (NLP). Nature covered this interesting topic in its article, “Open-source language AI challenges big tech’s models.”
The collaboration, called BigScience, launched an early version of its language model earlier this month and hopes that it will ultimately help to reduce the harmful outputs of artificial intelligence (AI) language systems.
Natural language generation systems are neural networks that have been pre-trained on a large collection of writings. Using deep learning methods, the model automatically creates human-like text from a simple input prompt. The results can be strikingly realistic and at times difficult to distinguish from something written by an actual person.
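To make the prompt-to-text idea concrete, here is a minimal sketch using the Hugging Face transformers library. The choice of GPT-2 as the pretrained model is an assumption for illustration only; it stands in for a generic pretrained language model, not the BigScience model itself.

```python
# A minimal sketch of prompt-based text generation.
# Assumptions: the Hugging Face transformers library is installed, and GPT-2
# stands in for a generic pretrained language model (not the BigScience model).
from transformers import pipeline

# Load a text-generation pipeline backed by a pretrained causal language model.
generator = pipeline("text-generation", model="gpt2")

prompt = "Open-source language models could"

# Continue the prompt; sampling makes the output human-like but non-deterministic.
outputs = generator(prompt, max_length=50, num_return_sequences=1, do_sample=True)

print(outputs[0]["generated_text"])
```

Each run can produce a different continuation, which is exactly why the output can read like human writing while remaining hard to audit.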
Unfortunately, the approach also frequently leads to toxic language generation, making it difficult to trust such systems for automated business uses. The system doesn’t understand the words it’s using; it only knows that people have used them in similar contexts before.
This is part of a larger problem: organizations often have little insight into how AI systems reach their decisions or how the results are applied in various fields. Explainable AI allows users to understand and trust the results produced by machine learning algorithms.
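As a hedged sketch of what explainability can look like in practice, the example below uses the LIME package to show which words pushed a text classifier toward its prediction. The tiny training set and class labels are invented purely for demonstration; this is one common technique, not the specific method discussed in the article.

```python
# A minimal sketch of explainable AI for text classification.
# Assumptions: scikit-learn and the lime package are installed; the toy
# sentiment data below is invented purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Toy sentiment data (hypothetical): 1 = positive, 0 = negative.
texts = [
    "great helpful support",
    "awful rude reply",
    "friendly fast service",
    "terrible slow response",
]
labels = [1, 0, 1, 0]

# Train a simple classifier whose decisions we want to explain.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# LIME perturbs the input text and fits a local surrogate model, exposing
# which words contributed to the prediction and in which direction.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "the support was friendly but slow",
    model.predict_proba,  # must return class probabilities for a batch of texts
    num_features=4,
)
for word, weight in explanation.as_list():
    print(f"{word}: {weight:+.3f}")
```

Per-word weights like these give users a way to sanity-check a model’s output instead of taking it on faith.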
Melody K. Smith
Sponsored by Data Harmony, harmonizing knowledge for a better search experience.