As organizations look to the post-pandemic future, many are planning a hybrid virtual model that combines remote work with time in the office. This sensible decision follows solid productivity gains during the pandemic. But not everyone agrees. VentureBeat brought this topic to us in their article, “How AI will drive the hybrid work environment.”
The future of remote work is a hotly debated topic, with employees pushing for more time away from the office and many employers preferring to keep workers where they can see them. While productivity may have gone up, many employees report feeling anxious about job security and experiencing burnout.
Remote working is not without its drawbacks. Poor connectivity, potential security issues, and a lack of established workflows and procedures have challenged many businesses.
There are technologies available to help. Artificial intelligence (AI) has the capacity to vastly improve work-from-home environments, bringing much-needed support to communications, collaboration, workflow management, and even security.
Unfortunately, most organizations have little visibility into, or understanding of, how AI systems make the decisions they do. Explainable AI allows users to comprehend and trust the results and output created by machine learning algorithms. The term “explainable AI” describes an AI model, its expected impact, and its potential biases. Why is this important? Because those results can have an impact on data security or safety.
Melody K. Smith
Sponsored by Access Innovations, changing search to found.