The conversation around AI is changing — and the emphasis on the augmentation of current workers, rather than the wholesale replacement of segments of the workforce, is a significant (and many would argue, necessary) shift. However, anxiety and fear remain stubborn obstacles for those trying to usher in a new era of AI-assisted workplaces.
Zoom.ai is a chat-based productivity tool that helps employees automate everyday tasks including searching for files, scheduling meetings, and generating documents. In an interview with DX Journal, Sriubiskis said public opposition to AI remains a major stumbling block not just for technology companies, but for businesses around the world.
As the language around AI changes, it becomes obvious that people want change from the technology, but remain hesitant about the disruptive effect AI-based automation could bring to their industries.
As highlighted in a recent Forbes article, knowledge-based workers with tenure, who have developed their skill sets over time, are responding to automation with predictable anxiety — basic psychology at work. Unfortunately, that push-back can severely stunt the success of digital transformation projects designed to improve the lives of workers throughout the company, not replace them.
“A lot of people are afraid that AI’s going to take their job away,” said Sriubiskis. “That’s because that’s the narrative that we’ve seen for so long. It’s now about shifting the narrative to: AI’s going to make your job better and give you more time to focus more on the things that you’ve been hired to do because you’re good at doing them. There are tons of websites online talking about whether your job’s going to be taken away by AI, but they never really talk about how people’s jobs are going to be improved and what things they won’t have to do anymore so they can focus on the things that actually matter.”
Buy-in requires tangible results
This general AI anxiety can seem like a big obstacle to companies looking to adopt AI — but there are important steps companies can take to ensure their AI on-boarding is done with greater understanding and effectiveness.
As startups and businesses look to break through the AI fear-mongering, they have to demonstrate measurable benefits to employees, showing how AI can make work easier. By building an understanding of how AI affects employees, showing them how it benefits them, and using that information to inspire confidence in the project, businesses can work to create a higher level of employee buy-in.
One of the simplest examples of how to demonstrate this kind of benefit comes from Zoom.ai’s digital assistant for the workplace. An immediately beneficial way AI can augment knowledge-based workers is by giving them back their time. According to McKinsey & Company research cited by Zoom.ai, knowledge workers spend 19 percent of their time — one day a week — searching for and gathering information scattered across app and database silos. By showing how the employee experience can be improved with the use of automated meeting scheduling or document retrieval, you generate employee buy-in, said Sriubiskis.
“For us, the greatest advantage is giving employees some of their time back, so they can be more effective in the role that they were hired to do. So if there’s a knowledge-based worker, and they’re an engineer for example, they shouldn’t be spending time booking meetings, generating documents, finding information or submitting IT tickets. Their time would be better spent putting it towards their engineering work. For an enterprise company, based on our cases, we estimate that we can give employees at least 10 hours back a month. That allows them to be more productive, increase their collaboration and their creativity, and the overall employee experience improves.”
Full comprehension of a problem leads to better implementation
Another way to ensure a greater level of employee confidence is to understand the core problem that AI could be used to solve. You can’t just throw AI at an issue, said Sriubiskis. The application of the AI solution has to make sense in the context of an identified problem.
“When a lot of companies talk about their current endeavours, they’re saying, ‘we’re exploring AI to do this.’ But they’re not actually understanding a core problem that their employees are facing. If you just try to throw a new technology at a problem you don’t fully understand, you’re not going to be as successful as you want. You might be disappointed in that solution, and people are going to be frustrated that they wasted time without seeing any results.”
This deliberate effort to understand a key problem before implementing a solution can drive better outcomes. That’s why Zoom.ai has incorporated this kind of core observation into its process of on-boarding clients or approaching a new project.
“Before we do a proof-of-concept or a pilot now,” said Sriubiskis, “we require companies to do an interview with some of our product and our UI/UX team. That way, we can understand how they do things currently, but also so we can provide a quantitative metric. Qualitative is nice, but people also want to see the results, and make sure their work was worth it. We make sure to interview a whole bunch of users, clearly understand the problem, and make sure what we’re doing isn’t a barrier to what they’re actually trying to solve, it’s going to help it and help it more over time.”
These approaches are all about making the team of employees feel like an AI solution is working for them, leading to greater effectiveness of AI implementation to augment the workforce. It remains key, said Sriubiskis, to make sure employees can see the tangible benefits of the technology. Zoom.ai makes that employee experience a core part of its on-boarding process: “We report back to our users and tell them how many hours they’ve saved. So they see how the actual improvements are seen by them, not just by management or the company as a whole.”
The future is filled with AI. It’s just a question of making sure it helps, not hurts, human capital — and that a positive transition to AI tools prioritizes the employee experience along the way.
Why it’s not too late for your digital transformation journey
The conversation surrounding digital transformation has shifted well beyond questions of “should we,” to the affirmative “when we.”
Basically, the “why” has become “when.”
But a new study from Wipro Digital — a follow-up to the company’s 2017 survey about leadership within digital transformation — ultimately shows that it isn’t too late for companies that are only just beginning the journey to catch up.
Additionally, where the 2017 survey found that one in three enterprise CEOs felt digital transformation efforts were a waste of time, the updated report shows that number is now essentially at 0%.
While 87 percent of the 1,400 global enterprise C-suite leaders polled believe that companies who have started later than others still have a chance to climb to the level of their competitors, the biggest barriers identified are not the technology, but people-related issues.
Getting leadership on board
Taking a closer look, the biggest challenge comes down to sponsorship and business alignment, further emphasizing the importance of internal buy-in as a crucial first step to digital transformation:
- 54 percent cited inconsistent sponsorship from senior leadership
- 56 percent said they were unable to train their existing teams to change or use new technology, methods or processes
- 55 percent indicated needing better alignment with business stakeholders
Ultimately, once these personnel issues are addressed, the technology itself becomes the greater barrier: specifically, adapting it and training the lines of business to use it.
Our new #digitaltransformation survey of 1400 C-suite leaders found executive sponsorship & business alignment are significant barriers, particularly in the US and Canada. Time to #workdifferently Read more: https://t.co/x6fn3pduTI #infographic
— Wipro Digital (@WiproDigital) September 4, 2019
“These results show that in the past two years, enterprise leaders have ensured that their organizations are capable of delivering ROI on their digital transformation efforts,” explains Rajan Kohli, president of Wipro Digital. “Leaders must align stakeholders and help their business units adapt to and leverage new technology, methods or processes.”
DX Journal covers the impact of digital transformation (DX) initiatives worldwide across multiple industries.
‘Ethical AI’ matters — the problem lies in defining it
News that Microsoft will invest around $1 billion to examine ethical artificial intelligence signals that the tech sector is thinking more deeply about the ethics underlying transformative technologies. But what is ethical AI?
Microsoft is to invest around $1 billion into the OpenAI project, a group that has Elon Musk and Amazon as members. The partners are seeking to establish “shared principles on ethics and trust”. The project is considering two streams: cognitive science, which is linked to psychology and considers the similarities between artificial intelligence and human intelligence; and machine intelligence, which is less concerned with how similar machines are to humans, and instead is focused on how systems behave in an intelligent way.
With the growth of smart technology comes an increased reliance on algorithms that continue to evolve. Increasingly, people are asking whether an ethical framework is needed in response. It would appear so, with some machines now carrying out specific tasks more effectively than humans can. This leads to the questions ‘what is ethical AI?’ and ‘who should develop and regulate those ethics?’
AI’s ethical dilemmas
We’re already seeing examples of what can go wrong when artificial intelligence is granted too much autonomy. Amazon had to pull an AI-operated recruiting tool after it was found to be biased against female applicants. A different form of bias was found in a machine learning-based recidivism assessment tool that discriminated against black defendants. The U.S. Department of Housing and Urban Development recently sued Facebook over its advertising algorithms, which allow advertisers to discriminate based on characteristics such as gender and race. For similar reasons, Google opted not to renew its artificial intelligence contract with the U.S. Department of Defense, citing ethical concerns.
These examples outline why, at the early stages, AI produces ethical dilemmas and perhaps why some level of control is required.
Designing AI ethics
Ethics is an important design consideration as artificial intelligence technology progresses. This philosophical inquiry extends from how humanity wants AI to make decisions to which types of decisions it should make at all. This is especially important where there is potential danger (as with many autonomous driving scenarios), and extends to a more dystopian future where AI could replace human decision-making at work and at home. In between, one notable experiment detailed what might happen if an artificially intelligent chatbot became virulently racist, a study intended to highlight the challenges humanity might face if machines ever become super intelligent.
While there is agreement that AI needs an ethical framework, what should this framework contain? There appears to be little consensus over the definition of ethical and trustworthy AI. A starting point is the European Union document titled “Ethics Guidelines for Trustworthy AI”. In this brief, the key criteria are for AI to be democratic, to contribute to an equitable society, to support human agency, to foster fundamental rights, and to ensure that human oversight remains in place.
These are important concerns for a liberal democracy. But how do these principles stack up against threats to human autonomy, as with AI that interacts with and seeks to influence behavior, as in the Facebook Cambridge Analytica affair? Even with Google search results, the output, which is controlled by an algorithm, can have a significant influence on the behavior of users.
Furthermore, should AI be used as a weapon? If robots become sophisticated enough (and it can be proven they can ‘reason’), should they be given rights akin to a human’s? The question of ethics runs very deep.
Grappling with some of these issues is what led to the formation of OpenAI. According to Smart2Zero, OpenAI’s primary goal is to ensure that artificial intelligence can be deployed in a way that is both safe and secure, so that the economic benefits can be widely distributed through society. Notably, this does not capture all of the European Union goals, such as how democratic principles will be protected or how human autonomy will be kept central to any AI application.
As a consequence of Microsoft joining the consortium, OpenAI will seek to develop advanced AI models built on Microsoft’s Azure cloud computing platform. There are few specific details of how the project will progress.
Commenting on Microsoft’s big investment and commitment to the project, Microsoft chief executive Satya Nadella does not shed much light: “AI is one of the most transformative technologies of our time and has the potential to help solve many of our world’s most pressing challenges…our ambition is to democratize AI.”
Do we need regulation?
It is probable that the OpenAI project will place business first, and it will no doubt seek to reduce areas of bias. This in itself is key to the goals of the partners involved. For wider ethical issues it will be down to governments and academia to develop strong frameworks, and for these to gain public acceptance, and then for an appropriate regulatory structure to be put in place.
Digital transformation is causing C-suite tensions
Digital transformation is not only about technology; it is also about changes in practice that need to diffuse through an organization’s culture. This needs to begin at the top. A new report finds C-suite discord is a block to effective DX processes.
Rapidly undergoing effective digital transformation puts a strain on C-suite relationships, according to a new survey of major enterprises. The report was produced by business management software provider Apptio and commissioned by the Financial Times. Titled “Disruption in the C-suite”, the report draws on the findings of a survey of 555 senior executives (50 percent occupying CxO roles). The executives were based in major economic nations: Australia, Denmark, France, Germany, Italy, Japan, the Netherlands, Norway, Spain, Sweden, the UK and the U.S.
The report finds that while digital transformation leads to greater collaboration across different business functions, it can also create blurred responsibilities across the C-suite. This crossover carries the risk of key issues being missed; it also serves as a source of tension between top executives, as traditional functions merge and territorial disputes are triggered. As a sign of such differences, 71 percent of finance executives felt the IT function within the C-suite should develop greater influencing skills to better deliver the change their business requires.
Team deficiencies found in the survey included not having key performance indicators in place to measure digital transformation progress. The CFO was also found to be the least deeply aligned member of the C-suite team, particularly with the CIO.
To overcome these divisions, the report recommends that organizations invest time in ‘bridging the trust gap’ between functions and seek to ease tensions, especially between the offices of the CIO and the CFO. An important factor is establishing which function has accountability. Another measure is ensuring that data is more transparent and that key metrics are issued in ‘real time’.
The report also charts how fully digital transformation is being embraced, with leaders at global brands adopting processes and technologies such as artificial intelligence, workplace reskilling, cloud computing, agile working and de-centralized decision-making.