Teaching a Machine through Conversations

In the Iron Man series, J.A.R.V.I.S. (Just A Rather Very Intelligent System) is a fictional artificial intelligence that functions as Tony Stark's assistant, running all the internal systems of Stark's buildings and the Iron Man suits. It can converse with Stark at any length, with considerable sophistication, and respond in kind.

Today, the same fundamental pattern recognition techniques produce state-of-the-art results in natural language understanding, face recognition, speech recognition, and more. It is possible to show a computer many examples of something so it can recognize it or extract meaning from it accurately. Just like the AI that Stark built for himself, it should be possible to teach an assistant how to perform specific tasks. But before teaching it to perform useful tasks, can it be taught to talk to a human first?

Conversations are powerful. We can express ideas, information, knowledge, thoughts, and feelings, and understand what is expressed by others. Effective two-way conversation requires understanding what is being said, and understanding intent is fundamental to that. Machine learning approaches to understanding human language and handling contextual dialogue are constantly improving.

It is still quite common for AI assistants to make errors. However, instead of pre-empting these errors by providing extensive conversational data to train the assistant for every eventuality, there should be a mechanism to hand-hold it through recovering from errors and getting back on track. In a way, an assistant should be able to learn from the feedback provided and get better at having a conversation.

Humans have been interacting with machines for quite some years now. It started with configuring machines to answer a few simple structured commands, which evolved into assistants that fetch information from a set of predefined questions and answers (FAQs). Enabling these assistants to have engaging conversations requires them to handle context within a conversation. Eventually, the AI assistant should be able to leverage user data to tailor the conversation to one's preferences and offer a truly personalized experience. At this level, an AI assistant will learn when it's a good time to get in touch and proactively reach out based on this context. It could mean blurring the line between an actual human assistant and artificial intelligence.

Turing proposed that a computer can be said to possess artificial intelligence if it can mimic human responses under specific conditions. The Turing Test has been criticized, in particular, because the nature of the questioning had to be limited ("Yes" or "No" answers, or questions confined to a narrow field of knowledge) in order for a computer to exhibit human-like intelligence.

As an antithesis to this idea of testing whether a computer program can successfully fool the questioner, we could help the program evolve to respond in a way that convinces the questioner that the responder is human.

Interactive Learning is a great way to train an AI assistant and generate training data while conversing with the chatbot. During the conversation, one can provide feedback on every prediction that is made: intent classification, entity extraction, context identification and management, as well as response prediction. These conversations, where explicit feedback is given and corrections are made to the assistant's responses, can be used to train the multiple components working behind the scenes. Using interactive learning, you can design paths that an AI assistant can follow to eventually reach a solution. These can be seen as happy paths, where the user cooperates and the system understands the user input correctly.
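To make this concrete, here is a minimal, framework-agnostic sketch of such a feedback loop: each intent prediction is shown to a human reviewer, corrections are captured, and confirmed turns are stored as new training examples. The `predict_intent` method and the JSONL log file are hypothetical stand-ins; real toolkits (Rasa's interactive learning mode, for example) provide their own versions of this loop.

```python
# Minimal sketch of an interactive-learning loop (hypothetical, framework-agnostic):
# every prediction is confirmed or corrected by a human, and the resulting
# labelled examples are saved so the model can be retrained on them later.

import json

def interactive_session(nlu_model, log_path="interactive_training_data.jsonl"):
    """Converse with the assistant; confirm or correct each intent prediction."""
    collected = []
    while True:
        text = input("You (blank to stop): ").strip()
        if not text:
            break

        predicted_intent = nlu_model.predict_intent(text)   # assumed model API
        answer = input(f"Predicted intent '{predicted_intent}' - correct? [y/n] ")

        if answer.lower().startswith("n"):
            predicted_intent = input("Enter the correct intent label: ").strip()

        # Every confirmed or corrected turn becomes a labelled training example.
        collected.append({"text": text, "intent": predicted_intent})

    # Persist the new examples for the next training run.
    with open(log_path, "a") as f:
        for example in collected:
            f.write(json.dumps(example) + "\n")

    return collected
```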

To give you an example, using this technique, we can train the assistant to answer the question "how's the weather?" by fetching information about the current weather from an API and responding accordingly. However, the questions "what's the weather like outside?" and "how's the weather?" are asking the same thing, and "what's the weather like outside?" can itself be phrased in hundreds of ways. People say identical things in numerous ways, and they make mistakes when writing or speaking. They may use the wrong words, write fragmented sentences, and misspell or mispronounce words. Can the assistant be made to generalize and understand each of these intents instead of being trained again and again? The answer is Natural Language Understanding (NLU). NLU helps infer what was meant even when the written or spoken language is flawed, by leveraging AI algorithms to recognize attributes of language such as sentiment, semantics, context, and intent. It enables computers to understand the subtleties and variations of language.
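As a rough illustration of this kind of generalization, the toy classifier below is trained on a handful of invented example utterances per intent and then asked about an unseen paraphrase. It uses scikit-learn as a stand-in for a full NLU pipeline; the intent labels and training phrases are assumptions made up for the example.

```python
# Toy NLU intent classifier: a few example phrasings per intent let the model
# generalize to unseen paraphrases instead of matching exact strings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_examples = [
    ("how's the weather?",               "ask_weather"),
    ("what's the weather like outside?", "ask_weather"),
    ("is it going to rain today",        "ask_weather"),
    ("whats the wether like",            "ask_weather"),   # misspellings happen too
    ("hello there",                      "greet"),
    ("hi, good morning",                 "greet"),
    ("bye, see you later",               "goodbye"),
    ("talk to you tomorrow",             "goodbye"),
]

texts, labels = zip(*training_examples)

# Character n-grams make the model more tolerant of typos and misspellings.
classifier = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
classifier.fit(texts, labels)

# Likely prediction: ['ask_weather'], even though this phrasing was never seen.
print(classifier.predict(["hows the weather looking outside"]))
```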

While understanding and deciphering a user message is crucial, there are some key capabilities that should be considered while designing a conversational AI assistant.

Understanding the context of the conversation is key, as it can either dramatically shorten conversations or make it possible to deal with ambiguous user input.

Figure: Different types of context, each carrying different contextual data, impact the flow of the conversation in our case.

A contextual assistant can handle any user goal gracefully and help accomplish it as well as possible. This doesn't mean the assistant can answer everything, but it at least steers the conversation to help the user, for example by handing it over to a human.
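As a rough sketch of what handling context and handing over to a human can look like in code, the snippet below tracks slots across turns and falls back to a human after repeated low-confidence turns. The slot names, confidence threshold, and responses are illustrative assumptions, not taken from any particular framework.

```python
# Minimal sketch of a dialogue context (slot) tracker with a human-handoff fallback.
# Names and thresholds are illustrative only.
from dataclasses import dataclass, field

@dataclass
class ConversationContext:
    slots: dict = field(default_factory=dict)   # remembered facts, e.g. {"city": "Berlin"}
    failed_turns: int = 0                       # consecutive turns we could not understand

def handle_turn(context, intent, entities, confidence,
                handoff_threshold=0.3, max_failures=2):
    """Update the context with new entities and decide how to respond."""
    context.slots.update(entities)

    if confidence < handoff_threshold:
        context.failed_turns += 1
        if context.failed_turns >= max_failures:
            return "Let me connect you with a human colleague who can help."
        return "Sorry, I didn't quite get that. Could you rephrase?"

    context.failed_turns = 0
    if intent == "ask_weather":
        city = context.slots.get("city")
        # Remembered context lets us skip re-asking for the city on follow-ups.
        return f"Fetching the weather for {city}..." if city else "Which city are you in?"
    return "I'm not sure how to help with that yet."

# Example: the city provided in an earlier turn is reused for the follow-up question.
ctx = ConversationContext()
print(handle_turn(ctx, "ask_weather", {}, 0.9))                  # "Which city are you in?"
print(handle_turn(ctx, "ask_weather", {"city": "Berlin"}, 0.85)) # "Fetching the weather for Berlin..."
```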

Building a contextual AI assistant that can converse and learn simultaneously is possible. The challenge is to design a machine learning framework that enables collaborative or crowd-sourced learning. Why shouldn't the assistant be trained by each one of us to solve our own personal use cases and then be shared with others? If this is made possible, the opportunities are endless: software developers could teach assistants to write code, data scientists could enable them to train predictive models, doctors could use them for triage and personalized patient management, and much more.
