The great buy-in: How to learn to love AI at work


The conversation around AI is changing — and the emphasis on the augmentation of current workers, rather than the wholesale replacement of segments of the workforce, is a significant (and many would argue, necessary) shift. However, anxiety and fear are still tough contenders for those trying to usher in a new era of AI-assisted workplaces.

“It all comes down to what people want to change,” said Matas Sriubiskis, Growth Analyst at Zoom.ai, during the recent mesh conference meetup at Spaces in downtown Toronto.

Zoom.ai is a chat-based productivity tool that helps employees automate everyday tasks, including searching for files, scheduling meetings, and generating documents. In an interview with DX Journal, Sriubiskis said public opposition to AI remains a major stumbling block not just for technology companies, but for businesses around the world.

As the language around AI changes, it becomes obvious that people want change from the technology, but remain hesitant about the disruptive effect AI-based automation could bring to their industries.

As highlighted in a recent Forbes article, knowledge-based workers with tenure, who have developed their skill set over time, are reacting in line with basic psychology when it comes to fears surrounding automation. Unfortunately, that pushback can severely stunt the success of digital transformation projects designed to improve the lives of workers throughout the company, not replace them.

“A lot of people are afraid that AI’s going to take their job away,” said Sriubiskis. “That’s because that’s the narrative that we’ve seen for so long. It’s now about shifting the narrative to: AI’s going to make your job better and give you more time to focus more on the things that you’ve been hired to do because you’re good at doing them. There are tons of websites online talking about whether your job’s going to be taken away by AI, but they never really talk about how people’s jobs are going to be improved and what things they won’t have to do anymore so they can focus on the things that actually matter.”

Buy-in requires tangible results

This general AI anxiety can seem like a big obstacle to companies looking to adopt AI — but there are important steps companies can take to ensure their AI on-boarding is done with greater understanding and effectiveness.

As startups and businesses look to break through the AI fear-mongering, they have to demonstrate measurable benefits to employees, showing how AI can make work easier. By building an understanding of how AI affects employees, showing them how it benefits them, and using that information to inspire confidence in the project, businesses can work to create a higher level of employee buy-in.

One of the simplest examples of how to demonstrate this kind of benefit comes from Zoom.ai’s digital assistant for the workplace. An immediately beneficial way AI can augment knowledge-based workers is by giving them back their time. According to McKinsey & Company research cited by Zoom.ai, knowledge workers spend 19 percent of their time — one day a week — searching for and gathering information that is siloed across apps and databases. By showing how the employee experience can be improved with the use of automated meeting scheduling or document retrieval, you generate employee buy-in, said Sriubiskis.

“For us, the greatest advantage is giving employees some of their time back, so they can be more effective in the role that they were hired to do. So if there’s a knowledge-based worker, and they’re an engineer for example, they shouldn’t be spending time booking meetings, generating documents, finding information or submitting IT tickets. Their time would be better spent putting it towards their engineering work. For an enterprise company, based on our cases, we estimate that we can give employees at least 10 hours back a month. That allows them to be more productive, increase their collaboration and their creativity, and the overall employee experience improves.”
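The time figures cited above can be sanity-checked with simple arithmetic. The sketch below assumes a 40-hour, five-day work week and four working weeks per month; those baselines are illustrative assumptions, not figures from the interview.

```python
# Sanity check on the time figures cited in the article.
# Assumptions (not from the article): 40-hour, 5-day week; 4 weeks/month.
HOURS_PER_WEEK = 40
DAYS_PER_WEEK = 5
WEEKS_PER_MONTH = 4

# McKinsey figure cited by Zoom.ai: 19% of time spent searching for information.
search_share = 0.19
hours_searching_per_week = search_share * HOURS_PER_WEEK  # 7.6 hours
days_searching_per_week = search_share * DAYS_PER_WEEK    # 0.95 days, i.e. roughly one day

# Zoom.ai's estimate: at least 10 hours returned per employee per month.
hours_saved_per_month = 10
hours_saved_per_week = hours_saved_per_month / WEEKS_PER_MONTH  # 2.5 hours

print(hours_searching_per_week, days_searching_per_week, hours_saved_per_week)
```

Under these assumptions the 19 percent figure works out to just under one full day per week, consistent with the article's "one day a week" framing.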

Full comprehension of a problem leads to better implementation

Another way to ensure a greater level of employee confidence is to understand the core problem that AI could be used to solve. You can’t just throw AI at an issue, said Sriubiskis. The application of the AI solution has to make sense in the context of an identified problem.

“When a lot of companies talk about their current endeavours, they’re saying, ‘we’re exploring AI to do this.’ But they’re not actually understanding a core problem that their employees are facing. If you just try to throw a new technology at a problem you don’t fully understand, you’re not going to be as successful as you want. You might be disappointed in that solution, and people are going to be frustrated that they wasted time without seeing any results.”

This deliberate effort to understand a key problem before implementing a solution can drive better outcomes. That’s why Zoom.ai has incorporated this kind of core observation into its process of on-boarding clients or approaching a new project.

“Before we do a proof-of-concept or a pilot now,” said Sriubiskis, “we require companies to do an interview with some of our product and our UI/UX team. That way, we can understand how they do things currently, but also so we can provide a quantitative metric. Qualitative is nice, but people also want to see the results, and make sure their work was worth it. We make sure to interview a whole bunch of users, clearly understand the problem, and make sure what we’re doing isn’t a barrier to what they’re actually trying to solve, but something that helps it, and helps it more over time.”

These approaches are all about making employees feel like an AI solution is working for them, leading to more effective AI implementation to augment the workforce. It remains key, said Sriubiskis, to make sure employees can see the tangible benefits of the technology. Zoom.ai makes that employee experience a core part of its on-boarding process: “We report back to our users and tell them how many hours they’ve saved. So they see how the actual improvements are seen by them, not just by management or the company as a whole.”

The future is filled with AI. It’s just a question of making sure it helps, not hurts, human capital — and that a positive transition to AI tools prioritizes the employee experience along the way.


‘Ethical AI’ matters — the problem lies in defining it


News that Microsoft will invest around $1 billion to examine ethical artificial intelligence signals that the tech sector is thinking more deeply about the ethics underlying transformative technologies. But what is ethical AI?

Microsoft is to invest around $1 billion into the OpenAI project, a group that has Elon Musk and Amazon as members. The partners are seeking to establish “shared principles on ethics and trust”. The project is considering two streams: cognitive science, which is linked to psychology and considers the similarities between artificial intelligence and human intelligence; and machine intelligence, which is less concerned with how similar machines are to humans, and instead is focused on how systems behave in an intelligent way.

With the growth of smart technology comes an increasing need for humanity to place its trust in algorithms that continue to evolve. Increasingly, people are asking whether an ethical framework is needed in response. It would appear so, with some machines now carrying out specific tasks more effectively than humans can. This leads to the questions: what is ethical AI, and who should develop and regulate its ethics?

AI’s ethical dilemmas

We’re already seeing examples of what can go wrong when artificial intelligence is granted too much autonomy. Amazon had to pull an AI-operated recruiting tool after it was found to be biased against female applicants. A different form of bias was associated with a machine learning-based recidivism assessment tool that discriminated against black defendants. The U.S. Department of Housing and Urban Development has recently sued Facebook over its advertising algorithms, which allow advertisers to discriminate based on characteristics such as gender and race. For similar reasons, Google opted not to renew its artificial intelligence contract with the U.S. Department of Defense, citing ethical concerns.

These examples outline why, at the early stages, AI produces ethical dilemmas and perhaps why some level of control is required.

Designing AI ethics

Ethics is an important design consideration as artificial intelligence technology progresses. This philosophical inquiry covers how humanity wants AI to make decisions, and which types of decisions it should be allowed to make. This is especially important where there is potential danger (as with many autonomous driving scenarios), and it extends to a more dystopian future where AI could replace human decision-making at work and at home. In between, one notable experiment detailed what might happen if an artificially intelligent chatbot became virulently racist, a study intended to highlight the challenges humanity might face if machines ever become superintelligent.

While there is agreement that AI needs an ethical framework, what should this framework contain? There appears to be little consensus over the definition of ethical and trustworthy AI. A starting point is the European Union document titled “Ethics Guidelines for Trustworthy AI”. In this brief, the key criteria are for AI to be democratic, to contribute to an equitable society, to support human agency, to foster fundamental rights, and to ensure that human oversight remains in place.

These are important concerns for a liberal democracy. But how do these principles stack up against threats to human autonomy, as with AI that interacts with users and seeks to influence their behavior, as in the Facebook Cambridge Analytica issue? Even with Google search results, the output, which is controlled by an algorithm, can have a significant influence on the behavior of users.

Furthermore, should AI be used as a weapon? If robots become sophisticated enough (and it can be proven they can ‘reason’), should they be given rights akin to a human’s? The question of ethics runs very deep.

OpenAI’s aims

Grappling with some of these issues is what led to the formation of OpenAI. According to Smart2Zero, OpenAI’s primary goal is to ensure that artificial intelligence can be deployed in a way that is both safe and secure, so that the economic benefits can be widely distributed through society. Notably, this does not capture all of the European Union goals, such as how democratic principles will be protected or how human autonomy will be kept central to any AI application.

With Microsoft joining the consortium, OpenAI will seek to develop advanced AI models built on Microsoft’s Azure cloud computing platform. There are few specific details of how the project will progress.

Commenting on Microsoft’s big investment and commitment to the project, Microsoft chief executive Satya Nadella sheds little light: “AI is one of the most transformative technologies of our time and has the potential to help solve many of our world’s most pressing challenges…our ambition is to democratize AI.”

Do we need regulation?

It is probable that the OpenAI project will place business first, and it will no doubt seek to reduce areas of bias. This in itself is key to the goals of the partners involved. For wider ethical issues, it will be down to governments and academia to develop strong frameworks, for these to gain public acceptance, and then for an appropriate regulatory structure to be put in place.


Digital transformation is causing C-suite tensions


Photo by Taylor Nicole on Unsplash

Digital transformation is not only about technology; it’s also about changes in practice that need to diffuse through an organization’s culture. This needs to begin at the top. A new report finds C-suite discord is a block to effective DX processes.

Rapidly undergoing effective digital transformation puts a strain on C-suite relationships, according to a new survey of major enterprises. The report was produced by business management software provider Apptio and commissioned by the Financial Times. Titled “Disruption in the C-suite”, the report draws on the findings of a survey of 555 senior executives (50 percent occupying CxO roles). The executives were based in major economic nations: Australia, Denmark, France, Germany, Italy, Japan, the Netherlands, Norway, Spain, Sweden, the UK and the U.S.

The report finds that while digital transformation leads to greater collaboration across different business functions, it can also create blurred responsibilities across the C-suite. This crossover carries the risk of key issues being missed; it also serves as a source of tension between top executives, as traditional functions merge and territorial disputes are triggered. As a sign of such differences, 71 percent of finance executives said the IT function should be developing greater influencing skills within the C-suite to better deliver the change their business requires.

Team deficiencies found in the survey included not having key performance indicators in place to measure digital transformation progress. The CFO was also found to be the least aligned member of the C-suite team, particularly with the CIO.

To overcome these divisions, the report recommends that organizations invest time in ‘bridging the trust gap’ between functions and seek to ease tensions, especially between the offices of the CIO and the CFO. An important factor is establishing which function has accountability. Another measure is ensuring that data is more transparent and that key metrics are issued in ‘real time’.

The report also charts how digital transformation is taking hold, as leaders at global brands embrace processes and technologies like artificial intelligence, workplace reskilling, cloud computing, agile working and de-centralized decision-making.


Calgary college launches new program in response to a changing workforce

Businesses in Alberta have seen an upswing in the need for trained IT professionals, and with the launch of a new Information Technology Systems diploma this fall, Bow Valley College is prepared to provide the talent.


Photo courtesy Bow Valley College

This article is a sponsored post by:

Bow Valley College

Back when floppy disks and dial-up internet were the height of technology in the office, concepts like 3D printers and self-checkout machines were pure science fiction.

It’s only been 20 years since then, but the world has since gone through a digital transformation that’s impacting businesses everywhere.

In a 2016 survey conducted by the global enterprise software company IFS, 86 per cent of senior business leaders from 20 different countries said that digital transformation would play a key role in their market over the following five years.

This shift into a digital marketplace has also affected what kind of skills employers need, and Calgary’s Bow Valley College is working to provide the training needed to fill those in-demand roles.

Training rooted in industry demand

With the launch of the new Information Technology Systems (ITS) Diploma this fall, students will be given the most up-to-date IT education to provide a skilled workforce to businesses across Alberta.

Jeff Clemens, program coordinator and instructor at Bow Valley College, played a role in creating the ITS program, and said the process started with consulting industry professionals across the province. All of the companies consulted said they were in need of more trained IT experts to support the technology that keeps them running.

“Industry demand was a big reason why we launched this program,” said Clemens. “The main feedback we got from consulting with people was: ‘We need more graduates.’ Even our own IT staff here at Bow Valley College are saying, ‘When will you be getting these graduates, because we need more people’.”

Hector Henriquez is a desktop analyst in Bow Valley College’s IT department and said he’s also noticed an influx of companies in the city searching for IT professionals over the past few years.

“Nowadays, having IT is more and more essential,” said Henriquez. “Even the basic services that everyone takes for granted, like internet and email and printing, need to be maintained and updated and secured. You can’t run a business now without IT.”

Entry-level positions lead to exciting careers in tech

During consultations, Clemens said, businesses specifically pointed to a gap in filling entry-level IT positions. Many only wanted people in entry-level roles for approximately a year, ultimately looking to move them into something more specialized, such as the growing field of cyber security.

“The move toward cloud computing and the focus on cyber security and data security is reflected in the number of jobs that are now in the market,” said Phil Ollenberg, Team Lead of Student Recruitment at Bow Valley College.

“There are now self-checkouts, so there are fewer actual cashiers, but there are IT professionals and data analysis professionals in the background who are supporting that technology — and those are higher paid jobs.”

Ollenberg added that the need for IT seems to be clear to students too, as the two-year ITS diploma already had applicants before it was even officially announced.

“Our prospective learners are seeking this career out,” he said. “They’re looking for what they know will be a guaranteed job.”

When the first students graduate from the ITS program in 2021, Clemens is confident that they’ll be ready to take on the industry demands. With solution-based training in the latest cloud and security software, they’ll be prepared to tackle the next technological advancement — even if it seems as futuristic as 3D printing did in 1999.

“With IT, you can’t just sit back and expect that things will stay the same,” Clemens continued. “This program is very hands-on. We’re giving them the base, but teaching them that the base will change, and that’s OK because they’ll still have that ability to learn and come up with solutions.”

For more information on the ITS program, visit the Bow Valley College website.
