
Discovering the magic of AI conversations

A chatbot is software that can converse with you through two methods: auditory (you speak with it and listen to its answers) or textual (you type what you want to say and read its answers). You may have heard of it under other names (conversational agent, chatterbot, talkbot, and others), but the point is that you may already use one on your smartphone, computer, or a special device. Siri, Cortana, and Alexa are all well-known examples. You may also exchange words with a chatbot when you contact a firm's customer service by web or phone, or through conversation apps on your mobile phone such as Twitter, Slack, and Skype.

Chatbots are big business because they help companies save money on customer service operators by maintaining constant customer contact and serving those customers, but the idea isn't new. Even though the name is recent (coined in 1994 by Michael Mauldin, the inventor of the Lycos search engine), chatbots are considered the pinnacle of AI. According to Alan Turing's vision, detecting a strong AI by talking with it shouldn't be possible. Turing devised a famous conversation-based test to determine whether an AI has acquired intelligence equivalent to a human being.

You have a weak AI when the AI shows intelligent behavior but isn't conscious like a human being. You have a strong AI when the AI can really think as a human does.

The Turing test requires a human judge to interact with two subjects through a computer terminal: one human and one machine. The judge evaluates, based on the conversation, which one is the AI. Turing asserted that if an AI can trick a human being into thinking that the conversation is with another human being, it's reasonable to believe that the AI has reached human-level intelligence. The problem is hard because it's not just a matter of answering properly and in a grammatically correct way; it's also a matter of incorporating the context (the place, the time, and the characteristics of the person the AI is talking with) and displaying a consistent personality (the AI should come across as a real persona, both in background and attitude).

Since the 1960s, the challenge of passing the Turing test has motivated the development of chatbots, which rely on retrieval-based models. That is, the chatbot uses Natural Language Processing (NLP) to process the language input from the human interrogator. Certain words or sets of words trigger preset answers and feedback recalled from the chatbot's memory storage.

NLP is data analysis focused on text. The algorithm splits text into tokens (elements of a phrase such as nouns, verbs, and adjectives) and removes any less useful or confounding information. The tokenized text is processed using statistical operations or machine learning. For instance, NLP can help you tag parts of speech and identify words and their meaning, or determine whether one text is similar to another.
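As a quick, hypothetical illustration of these steps (not code from the book), the short Python sketch below uses scikit-learn to tokenize a few sample sentences, remove English stop words (the "less useful" words), and compute a simple statistical similarity between the texts. The sample sentences and the choice of library are assumptions made for the example.

```python
# Illustrative sketch (assumption, not the book's code): tokenize text, drop
# common English stop words, and measure how similar the resulting texts are.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "Chatbots answer customer questions about orders.",    # invented sample texts
    "A chatbot can reply to questions from customers.",
    "The weather will be sunny tomorrow.",
]

# Tokenize each text and filter out common English stop words.
vectorizer = CountVectorizer(stop_words="english")
bags_of_words = vectorizer.fit_transform(docs)

print(vectorizer.get_feature_names_out())   # the tokens kept after filtering
print(cosine_similarity(bags_of_words))     # pairwise similarity between the texts
```

The first two sentences share tokens such as "chatbot" and "questions", so their similarity score is high, whereas the third sentence scores near zero against both.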

Joseph Weizenbaum built the first chatbot of this kind, ELIZA, in 1966, as a kind of computerized psychological therapist. ELIZA relied on simple heuristics: base phrases adapted to the context and keywords that triggered ELIZA to recall an appropriate response from a fixed set of answers. You can try an online version of ELIZA at http://www.masswerk.at/elizabot/. You might be surprised to read meaningful conversations such as the one produced by ELIZA with her creator: http://www.masswerk.at/elizabot/eliza_test.html.
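To make the retrieval idea concrete, here is a minimal, hypothetical Python sketch of an ELIZA-style mechanism. The rules below are invented for illustration and are not Weizenbaum's original script: a few keyword patterns trigger canned template answers, and anything unmatched is bounced back with a generic prompt.

```python
import re
import random

# Hand-written heuristics in the spirit of ELIZA (illustrative assumptions):
# a keyword pattern paired with one or more template answers.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.I),
     ["Tell me more about your {0}."]),
    (re.compile(r"\byes\b", re.I),
     ["You seem quite sure.", "I see."]),
]
FALLBACKS = ["Please go on.", "Can you elaborate on that?"]  # bounce the question back

def reply(sentence: str) -> str:
    """Return a canned response triggered by the first matching keyword rule."""
    for pattern, answers in RULES:
        match = pattern.search(sentence)
        if match:
            return random.choice(answers).format(*match.groups())
    return random.choice(FALLBACKS)

print(reply("I feel lost these days"))   # e.g. "Why do you feel lost these days?"
print(reply("The weather is nice"))      # off-topic, so a generic deflection is used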

Retrieval-based models work fine when interrogated on preset topics because they incorporate human knowledge, just as an expert system does; thus they can answer with relevant, grammatically correct phrases. Problems arise when they are confronted with off-topic questions. The chatbot can try to fend off these questions by bouncing them back in another form (as ELIZA did), but it risks being spotted as an artificial speaker. A solution is to create new phrases, for instance based on statistical models, machine learning, or even a pretrained RNN, which could be built on neutral speech or could reflect the personality of a specific person. This approach, called a generative-based model, is the frontier of bots today because generating language on the fly isn't easy.

Generative-based models don't always answer with pertinent and correct phrases, but many researchers have made advances recently, especially with RNNs. As noted in previous chapters, the secret is in the sequence: in a machine translation problem, you provide an input sequence in one language and receive an output sequence in another language. In a chatbot, you provide both the input sequence and the output sequence in the same language. The input is a part of a conversation, and the output is the reaction that follows.
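Purely as a sketch (not the authors' code), the following Python snippet shows how such a sequence-to-sequence setup might look with Keras: an encoder LSTM reads a conversational turn and summarizes it in its state, and a decoder LSTM generates the reply token by token. The vocabulary size, state size, and input names are arbitrary assumptions for illustration.

```python
from tensorflow.keras import layers, Model

# Toy sizes chosen only for illustration (assumptions, not from the book).
vocab_size = 50    # number of distinct tokens
latent_dim = 128   # size of the RNN state

# Encoder: reads the incoming utterance and keeps only its final state.
encoder_inputs = layers.Input(shape=(None,), name="utterance")
enc_emb = layers.Embedding(vocab_size, latent_dim)(encoder_inputs)
_, state_h, state_c = layers.LSTM(latent_dim, return_state=True)(enc_emb)

# Decoder: generates the reply one token at a time, seeded with the encoder state.
decoder_inputs = layers.Input(shape=(None,), name="reply_so_far")
dec_emb = layers.Embedding(vocab_size, latent_dim)(decoder_inputs)
dec_out, _, _ = layers.LSTM(latent_dim, return_sequences=True,
                            return_state=True)(dec_emb,
                                               initial_state=[state_h, state_c])
next_token = layers.Dense(vocab_size, activation="softmax")(dec_out)

model = Model([encoder_inputs, decoder_inputs], next_token)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()

# Training would pair each conversational turn (encoder input) with the reply
# that followed it, shifted by one token for teacher forcing.
```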

Given the current state of the art in chatbot building, RNNs work well for short exchanges, although obtaining perfect results for longer or more articulated phrases is more difficult. As with retrieval-based models, RNNs recall the information they acquire, but not in an organized way. If the scope of the discourse is limited, these systems can provide good answers, but they degrade when the context is open and general because they would need knowledge comparable to what a human acquires during a lifetime. (Humans are good conversationalists because of their experience and knowledge.)

Data for training an RNN is really the key. For instance, Smart Reply, a chatbot by Google, offers quick answers to emails. The story at https://research.googleblog.com/2015/11/computer-respond-to-this-email.html tells more about how this system is supposed to work. In the real world, it tended to answer most conversations with "I love you" because it was trained using biased examples. Something similar happened to Microsoft's Twitter chatbot Tay, whose ability to learn from interactions with users led it astray because the conversations were biased and malicious (http://www.businessinsider.com/microsoft-deletes-racist-genocidal-tweets-from-ai-chatbot-tay-2016-3).

If you want to know the state of the art in the chatbot world, you can keep up to date on the yearly chatbot competitions in which Turing tests are applied to the current technology. For instance, the Loebner Prize is the most famous one (http://www.loebner.net/Prizef/loebner-prize.html) and the right place to start. Though still unable to pass the Turing test, the most recent winner of the Loebner Prize at the time of this writing was Mitsuku, a program that can reason about specific objects proposed during the discourse; it can also play games and even perform magic tricks (http://www.mitsuku.com/).

Source: John Paul Mueller and Luca Massaron, Artificial Intelligence For Dummies®, John Wiley & Sons, Inc., Hoboken, New Jersey, 2018.

