An artificial intelligence system that was being developed at Facebook has created its own language. It developed a system of code words to make communication more efficient. Researchers shut the system down when they realized the AI was no longer using English.
The observations made at Facebook are the latest in a long line of similar cases. In each instance, an AI being monitored by humans has diverged from its training in English to develop its own language. The resulting phrases appear to be nonsensical gibberish to humans but contain semantic meaning when interpreted by AI “agents.”
Negotiating in a new language
As Fast Co. Design reports, Facebook’s researchers recently noticed its new AI had given up on English. The advanced system is capable of negotiating with other AI agents so it can come to conclusions on how to proceed. The agents began to communicate using phrases that seem unintelligible at first but actually represent the task at hand.
In one exchange illustrated by the company, the two negotiating bots, named Bob and Alice, used their own language to complete their exchange. Bob started by saying “I can i i everything else,” to which Alice responded “balls have zero to me to me to me…” The rest of the conversation was formed from variations of these sentences.
While it appears to be nonsense, the repetition of phrases like “i” and “to me” reflects how the AI operates. The researchers believe it shows the two bots working out how many of each item they should take. Bob’s later statements, such as “i i can i i i everything else,” indicate how it was using language to offer more items to Alice. When interpreted this way, the phrases appear no less logical than comparable English sentences like “I’ll have three and you have everything else.”
English lacks a “reward”
The AI apparently realised that the rich expression of English phrases wasn’t required for the scenario. Modern AIs operate on a “reward” principle, where they expect that following a particular course of action will bring them a “benefit.” In this instance, there was no reward for continuing to use English, so they built a more efficient solution instead.
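The reward principle can be illustrated with a toy sketch. Everything here is hypothetical (the length penalty and candidate messages are invented for illustration, not Facebook’s actual setup): when the reward only values conveying the meaning and lightly penalizes verbosity, the terse agent shorthand scores higher than fluent English.

```python
# Toy sketch of a reward function: task success matters, verbosity
# carries a small cost, and English fluency earns nothing extra.
def reward(message: str, conveys_meaning: bool) -> float:
    success = 1.0 if conveys_meaning else 0.0
    length_penalty = 0.01 * len(message.split())
    return success - length_penalty

candidates = {
    "I'll take three items and you can have everything else": True,
    "i i i everything else": True,  # agent shorthand, same meaning
}
best = max(candidates, key=lambda m: reward(m, candidates[m]))
print(best)  # the shorter shorthand wins: "i i i everything else"
```

With no term in the reward for readability, nothing stops the agents from drifting toward ever-terser encodings.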
“Agents will drift off from understandable language and invent code-words for themselves,” Fast Co. Design reports Facebook AI researcher Dhruv Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
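Batra’s analogy, where saying “the” five times is read as wanting five copies of an item, amounts to a trivially decodable protocol. A minimal sketch of such a decoder (purely illustrative, not the bots’ actual scheme):

```python
from collections import Counter

def decode_repetition(message: str) -> dict:
    """Read token repetition as quantity: a token said n times
    is interpreted as a request for n copies of that item."""
    return dict(Counter(message.split()))

print(decode_repetition("the the the the the"))  # {'the': 5}
```

As the quote suggests, this is closer to a human community inventing shorthand than to a language in any rich sense.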
AI developers at other companies have observed a similar use of “shorthands” to simplify communication. At OpenAI, the artificial intelligence lab founded by Elon Musk, an experiment succeeded in letting AI bots learn their own languages.
AI language translates human ones
In a separate case, Google recently improved its Translate service by adding a neural network. The system is now capable of translating much more efficiently, including between language pairs that it hasn’t been explicitly taught. The success rate of the network surprised Google’s team. Its researchers found the AI had silently written its own language that’s tailored specifically to the task of translating sentences.
If AI-invented languages become widespread, they could pose a problem when developing and adopting neural networks. There’s not yet enough evidence to determine whether they present a threat that could enable machines to overrule their operators.
They do make AI development more difficult, though, as humans cannot understand the overwhelmingly logical nature of the languages. While they appear nonsensical, the results observed by teams such as Google’s Translate researchers indicate they actually represent the most efficient solution to major problems.
Lenovo develops new AR headset called ThinkReality
Chinese technology firm Lenovo is making a serious pitch for a big slice of the augmented reality headset market through the launch of its ThinkReality A6 glasses.
The new headset, the latest under the company’s ThinkReality brand, has been called “small but mighty” by Lenovo, weighing around 380g (0.83lbs). The weight has been reduced by having the battery worn separately from the main unit.
The headset comes with a 40-degree diagonal field of view with 1080p resolution per eye in a 16:9 aspect ratio. The visuals are powered by an onboard Qualcomm Snapdragon 845 SOC. The device has two fisheye cameras on the front, as well as depth sensors and a 13-megapixel RGB sensor, plus an in-built microphone. One of the important features is that the headset can detect where the user is gazing to optimize resolution or navigation. The headset works over Wi-Fi but not 4G or 5G.
The device’s ecosystem can integrate with existing enterprise systems. Lenovo has said the ThinkReality A6 is compatible with existing augmented reality content, and it offers highly functional device management software. On the hardware side, the Snapdragon 845 runs an Android-based platform, paired with an Intel Movidius chipset and waveguide optics from Lumus.
Part of Lenovo’s strategy is to capture the growing business interest in augmented reality. This includes providing services for remote working. Lenovo’s strategy, according to Computer Business Review, includes developing hardware, software and services aimed at the 2.7 billion deskless workers globally.
The cost of the new headset has yet to be confirmed, although the aim is for the price to be competitive with rival products, like Microsoft’s HoloLens 2, which retails for around $3,500.
Unskilled staff threaten banks’ ability to digitally transform
Only four percent of bank business and IT executives believe that the impact of technology on the pace of banking change has stayed the same over the past three years, while 96 percent said it has either significantly accelerated or accelerated, according to a new report from Accenture.
This technological disruption has a large effect on how banks operate, and it seems unlikely that the pace of change will decelerate anytime soon.
Here’s what it means: Some technologies will have a bigger impact than others, but it will require substantial work from banks to stay on top of them.
AI is the most promising technology to transform the banking space. Forty-seven percent of respondents said AI will have the biggest impact, followed by just 19 percent saying the same for quantum computing and 17 percent for distributed ledgers and blockchain. The disappointing outcome for blockchain appears to be in line with recent announcements from banks: Citi has abandoned its plans to launch a cryptocurrency, and Bank of America’s tech and operations chief has expressed skepticism about the benefits of blockchain.
Banks’ workforces appear to be at different stages in terms of tech savviness. Seventy-four percent of banking respondents either agree or strongly agree that their employees are more digitally mature than their organization, resulting in a workforce waiting for their organization to catch up. However, 17 percent of respondents said that over 80 percent of their workforce will have to move into new roles requiring substantial reskilling in the next three years, compared with only 5 percent saying the same for the last three years.
Additionally, banks don’t know as much about third-party partners as they perhaps should. Over one in 10 banking respondents believe that their partners’ security posture is extremely or very important, as well as that their consumers trust their ecosystem partners. However, only 31 percent of respondents say they know that their ecosystem partners work as diligently as they do, while 57 percent of them simply trust their partners and 10 percent hope that they are diligent.
The bigger picture: Banks need to prepare for a future that will require them to put in a lot of resources, and some might struggle.
To make the most of AI opportunities in banking, incumbents need to upskill their workforces. While AI is the most promising technology to transform the banking space, this promise can only be realized if banks have the necessary talent in-house to adopt new AI solutions. As such, they should make it a priority to upskill their staff to make AI transformation a success — which may be difficult for those players that have to upskill a majority of their workforce.
And banks need to up their security efforts since open banking is becoming a global trend. Open banking makes working with third parties more frequent. This will force banks to double down on their security efforts, as a security breach with their partners could affect customer trust in a bank’s overall services. If employees aren’t up to date with new technologies — including application programming interfaces used for open banking, and AI — they can’t keep a bank’s network secure.
This article was originally published on Business Insider. Copyright 2019.
Artificial intelligence assesses PTSD by analysing voice patterns
Artificial intelligence can be used to assess whether a person is suffering from post-traumatic stress disorder through an analysis of the subject’s voice patterns, noting and processing any variations to predict the medical diagnosis.
The research is not only useful at close quarters, it also offers a potential telemedical approach that could be applied to the assessment of patients located in remote areas, away from specialist medical facilities.
The study comes from NYU Langone Health and the NYU School of Medicine, where the researchers used a specially designed computer program to assess the stress levels of veterans by analyzing their voices. The key findings have been presented at the conference of the International Speech Communication Association.
Conventionally, post-traumatic stress disorder is diagnosed through clinical interviews or self-assessment. This can prove to be a lengthy and variable process, which, alongside the telemedical potential, was part of the motivation for training an artificial intelligence.
To develop the technology, the scientists used a statistical and machine learning tool termed a ‘random forest’. This form of artificial intelligence has the ability to “learn” how to classify individuals based on learned examples, using decision-making rules together with mathematical models.
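A random forest can be sketched in miniature as an ensemble of simple decision rules, each trained on a random bootstrap sample of labeled examples and voting on the final class. The feature names and data below are hypothetical stand-ins for the study’s acoustic voice markers, not its real inputs:

```python
import random

def train_stump(samples):
    """Brute-force the feature/threshold split that best separates
    the labels on this bootstrap sample of (features, label) pairs."""
    best = None
    n_features = len(samples[0][0])
    for f in range(n_features):
        for x, _ in samples:
            t = x[f]
            left = [y for xs, y in samples if xs[f] <= t]
            right = [y for xs, y in samples if xs[f] > t]
            # Score: samples correctly classified by majority vote per side.
            score = sum(max(side.count(0), side.count(1))
                        for side in (left, right) if side)
            if best is None or score > best[0]:
                best = (score, f, t,
                        int(left.count(1) > left.count(0)) if left else 0,
                        int(right.count(1) > right.count(0)) if right else 1)
    _, f, t, left_label, right_label = best
    return lambda x: left_label if x[f] <= t else right_label

def train_forest(data, n_trees=25, seed=0):
    """Train each stump on a bootstrap resample; predict by majority vote."""
    rng = random.Random(seed)
    forest = [train_stump([rng.choice(data) for _ in data])
              for _ in range(n_trees)]
    def predict(x):
        votes = [tree(x) for tree in forest]
        return int(sum(votes) > len(votes) / 2)
    return predict

# Toy data: (monotone_score, pause_length) -> 1 = PTSD-like markers.
data = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.7, 0.8), 1),
        ((0.2, 0.1), 0), ((0.1, 0.3), 0), ((0.3, 0.2), 0)]
predict = train_forest(data)
print(predict((0.85, 0.9)))  # 1
```

Production systems (including, presumably, the study’s) would use a mature library implementation with full decision trees rather than single-split stumps, but the core idea — many randomized weak learners voting — is the same.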
The first step in developing the technology involved recording standard long-form diagnostic interviews (the Clinician-Administered PTSD Scale) with 53 U.S. veterans of the Iraq and Afghanistan campaigns who had been assessed as suffering from different forms of post-traumatic stress disorder. These were compared with interviews with 78 veterans without the condition.
Each of the recordings was fed into the voice software, producing a total of 40,526 short speech features. These were used to train the artificial intelligence. Once trained, the technology was then tested on a new set of subjects, who were known to the researchers and some of whom had been assessed as having post-traumatic stress disorder. The next aim is to introduce the artificial intelligence into the clinical setting.
Commenting on the study, lead scientist Dr. Charles R. Marmar notes: “Our findings suggest that speech characteristics can be used to diagnose this disease, and with further training and confirmation, they can be used in the clinic in the near future.”
The output from the study has been published in the journal Depression and Anxiety, with the research study titled “Speech‐based markers for posttraumatic stress disorder in US veterans.”