
Level Blog

2 min read

AI for the Straight Guy: Don’t believe the headlines

Dec 7, 2018 7:56:00 PM

Provocative headlines regarding AI are everywhere these days, but when you scratch beneath the surface, you’ll find that these are little more than clickbait.

Always read past the headline

AI has been called racist, sexist, homophobic and psychopathic in hyperbolic headlines so often that you’d be forgiven for believing it is some terrifying technology that will discriminate across all possible social and demographic lines. Here are some snippets of recent articles about Artificial Intelligence; within a few paragraphs of each, you learn the real reason behind the headline:

The media are well versed in exaggerated and misleading stories, and headlines like these aren’t limited to Artificial Intelligence. I wanted to focus on this area, though, because it is important not to let these stories distort your understanding of AI technology.

My chocolate cake tastes like chocolate?

They say you are what you eat but, in the case of AI, you are what you are programmed to be. If you make a cake and chocolate is on your list of ingredients, don’t be surprised when it tastes like chocolate. By the same token, if you put bad data into your AI tool, don’t be surprised when you get bad data out. Unfortunately, ‘Programmers used poor data and got poor results’ doesn’t make for a snappy or eye-catching headline.

Many of the biases found in these algorithms aren’t placed there intentionally – most blossom accidentally over time through repeated patterns in the data. The perception is that algorithms start off impartial and neutral and somehow become racist or sexist by themselves. In reality, they are amplifying existing prejudices that are already prevalent in the technology industry and in society as a whole. AI trained on bad data can itself turn bad. After all, “Machine Learning” is just a fancy way of saying “finding patterns in data”.
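To make that concrete, here is a deliberately tiny, made-up sketch in Python. The hiring data, the group names and the 50% threshold are all invented for illustration; the “model” does nothing cleverer than measure historical hire rates, yet it faithfully reproduces whichever preference the past data happens to contain.

    # A minimal, illustrative sketch: "learning" here is just counting patterns
    # in past decisions, so any bias in those decisions carries straight through.
    from collections import defaultdict

    # Hypothetical historical hiring decisions; in this toy dataset,
    # candidates from "group_a" were favoured by past human reviewers.
    history = [
        {"group": "group_a", "hired": True},
        {"group": "group_a", "hired": True},
        {"group": "group_a", "hired": False},
        {"group": "group_b", "hired": False},
        {"group": "group_b", "hired": False},
        {"group": "group_b", "hired": True},
    ]

    # "Training" is nothing more than tallying hires per group.
    counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
    for record in history:
        counts[record["group"]][0] += record["hired"]
        counts[record["group"]][1] += 1

    def predict(group):
        """Recommend a candidate if their group's historical hire rate exceeds 50%."""
        hires, total = counts[group]
        return hires / total > 0.5

    print(predict("group_a"))  # True  - inherits the historical preference
    print(predict("group_b"))  # False - reflects past data, not merit

Nobody wrote “be unfair” into that code; the unfairness arrived with the data.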

Fixing the issue

Creating unbiased AI software may not be as simple as getting the code right. Experts argue that the solution lies instead in a number of significant changes across the industry. We explore what these might be in the second part of this post, which you can read here.

Written by Jamie Stewart
