France and Canada collaborate on ethical AI

Canadian Prime Minister Justin Trudeau and French President Emmanuel Macron have made a commitment to engage experts across all areas of research to better understand how to develop artificial intelligence technologies that benefit all.

The new collaboration was announced by Trudeau and Macron on June 7, 2018, just ahead of the turbulent G7 Summit held in Charlevoix, Quebec. The basis of the collaboration will be an independent expert group, which will bring together specialists from both governments, internationally recognized scientists and representatives from industry. Interested members of civil society will also have an opportunity to take part.

Challenges and changes to society courtesy of AI

The new group will set out to identify the key challenges and opportunities that artificial intelligence presents, with a particular focus on delivering social and economic benefits. The group will also outline best practices designed to ensure that artificial intelligence fulfills this potential.

CIFAR voices support

The commitment to develop artificial intelligence for the benefit of people worldwide has been applauded by the Canadian Institute for Advanced Research (CIFAR), a Canada-based global organization with nearly 400 fellows, scholars and advisors from 17 countries. CIFAR has highlighted that the emphasis on ensuring artificial intelligence is ethical, with human needs at the forefront of future developments, lies at the heart of the France-Canada agreement.

In a statement, Alan Bernstein, president and CEO of CIFAR, said: “AI has the potential to change almost everything about how we work and live. We enthusiastically endorse the creation of an international study group charged with understanding emerging AI technologies and how to ensure they are beneficial. We look forward to working with our partners in Canada and internationally to support this commitment.”

CIFAR, which is based in Toronto, manages the $125 million federal Pan-Canadian Artificial Intelligence Strategy. Commenting on the announcement, Elissa Strome, executive director of the Pan-Canadian Artificial Intelligence Strategy, noted that it “builds on Canada’s longstanding leadership in AI research and innovation and the vibrant social science and policy community in Canada.”

She added: “We look forward to working with our partners at the three AI institutes in Edmonton, Toronto and Montreal and researchers across the country to support today’s declaration.”

Why it’s not too late for your digital transformation journey

The conversation surrounding digital transformation has shifted well beyond questions of “should we” to the affirmative “when will we.”

In short, the “why” has become “when.”

But a new study from Wipro Digital — a follow-up to the company’s 2017 survey about leadership within digital transformation — ultimately shows that it isn’t too late for companies that are only just beginning the journey to catch up.

Additionally, where the 2017 survey found that one in three enterprise CEOs felt digital transformation efforts were a waste of time, the updated report shows that number is now essentially at 0%. 

While 87 percent of the 1,400 global enterprise C-suite leaders polled believe that companies that started later than others still have a chance to catch up with their competitors, the biggest barriers identified are not technological but people-related.

Getting leadership on board

Taking a closer look, the biggest challenges come down to sponsorship, skills and business alignment, further emphasizing the importance of internal buy-in as a crucial first step in digital transformation:

  • 54 percent cited inconsistent sponsorship from senior leadership
  • 56 percent cited an inability to train their existing teams to change or to use new technologies, methods or processes
  • 55 percent indicated a need for better alignment with business stakeholders

Ultimately, once these personnel issues are addressed, the technology becomes the greater barrier, specifically its adoption and the subsequent training of the lines of business.

“These results show that in the past two years, enterprise leaders have ensured that their organizations are capable of delivering ROI on their digital transformation efforts,” explains Rajan Kohli, president of Wipro Digital. “Leaders must align stakeholders and help their business units adapt to and leverage new technology, methods or processes.”

‘Ethical AI’ matters — the problem lies in defining it

News that Microsoft will invest around $1 billion to examine ethical artificial intelligence signals that the tech sector is thinking more deeply about the ethics underlying transformative technologies. But what is ethical AI?

Microsoft is to invest around $1 billion in the OpenAI project, a group that has Elon Musk and Amazon as members. The partners are seeking to establish “shared principles on ethics and trust”. The project is considering two streams: cognitive science, which is linked to psychology and considers the similarities between artificial intelligence and human intelligence; and machine intelligence, which is less concerned with how similar machines are to humans and instead focuses on how systems behave in an intelligent way.

With the growth of smart technology comes an increased need for humanity to place trust in algorithms that continue to evolve. Increasingly, people are asking whether an ethical framework is needed in response. It would appear so, with some machines now carrying out specific tasks more effectively than humans can. This leads to the questions ‘what is ethical AI?’ and ‘who should develop ethics and regulate them?’

AI’s ethical dilemmas

We’re already seeing examples of what can go wrong when artificial intelligence is granted too much autonomy. Amazon had to pull an artificial intelligence-operated recruiting tool after it was found to be biased against female applicants. A different form of bias was found in a machine learning-based recidivism assessment tool that discriminated against black defendants. The U.S. Department of Housing and Urban Development has recently sued Facebook over its advertising algorithms, which allow advertisers to discriminate based on characteristics such as gender and race. For similar ethical reasons, Google opted not to renew its artificial intelligence contract with the U.S. Department of Defense.

These examples illustrate why, even at this early stage, AI produces ethical dilemmas, and perhaps why some level of control is required.

Designing AI ethics

Ethics is an important design consideration as artificial intelligence technology progresses. This philosophical inquiry spans how humanity wants AI to make decisions and which types of decisions it should be allowed to make. It is especially important where there is potential danger (as with many autonomous driving scenarios), and it extends to a more dystopian future in which AI could replace human decision-making at work and at home. In between, one notable experiment detailed what might happen if an artificially intelligent chatbot became virulently racist, a study intended to highlight the challenges humanity might face if machines ever become superintelligent.

While there is agreement that AI needs an ethical framework, what should this framework contain? There appears to be little consensus on the definition of ethical and trustworthy AI. A starting point is the European Union document titled “Ethics Guidelines for Trustworthy AI”. According to this brief, the key criteria are for AI to be democratic, to contribute to an equitable society, to support human agency, to foster fundamental rights, and to ensure that human oversight remains in place.

These are important concerns for a liberal democracy. But how do these principles stack up against threats to human autonomy, such as AI that interacts with people and seeks to influence their behavior, as in the Facebook-Cambridge Analytica issue? Even Google search results, whose output is controlled by an algorithm, can have a significant influence on the behavior of users.

Furthermore, should AI be used as a weapon? If robots become sophisticated enough (and it can be proven that they can ‘reason’), should they be given rights akin to those of a human? The question of ethics runs very deep.

OpenAI’s aims

Grappling with some of these issues is what led to the formation of OpenAI. According to Smart2Zero, OpenAI’s primary goal is to ensure that artificial intelligence can be deployed in a way that is both safe and secure, so that the economic benefits can be widely distributed throughout society. Notably, this does not capture all of the European Union’s goals, such as how democratic principles will be protected or how human autonomy will be kept central to any AI application.

As a consequence of Microsoft joining the consortium, OpenAI will seek to develop advanced AI models built upon Microsoft’s Azure cloud computing platform. There are few specific details of how the project will progress.

Commenting on Microsoft’s large investment and commitment to the project, Microsoft chief executive Satya Nadella did not shed much light: “AI is one of the most transformative technologies of our time and has the potential to help solve many of our world’s most pressing challenges…our ambition is to democratize AI.”

Do we need regulation?

It is probable that the OpenAI project will place business first, and it will no doubt seek to reduce areas of bias; this in itself is key to the goals of the partners involved. For wider ethical issues, it will fall to governments and academia to develop strong frameworks, for these to gain public acceptance, and then for an appropriate regulatory structure to be put in place.

Digital transformation is causing C-suite tensions

Digital transformation is not only about technology; it is also about changes in practice that need to diffuse through an organization’s culture. This needs to begin at the top. A new report finds that C-suite discord is a barrier to effective digital transformation processes.

Undergoing rapid, effective digital transformation puts a strain on C-suite relationships, according to a new survey of major enterprises. The report was produced by business management software provider Apptio and commissioned by the Financial Times. Titled “Disruption in the C-suite”, the report draws on the findings of a survey of 555 senior executives (50 percent occupying CxO roles). The executives were based in major economic nations: Australia, Denmark, France, Germany, Italy, Japan, the Netherlands, Norway, Spain, Sweden, the UK and the U.S.

The report finds that while digital transformation leads to greater collaboration across different business functions, it can also blur responsibilities across the C-suite. This crossover carries the risk of key issues being missed; it also serves as a source of tension between top executives, as traditional functions merge and territorial disputes are triggered. As a sign of such differences, 71 percent of finance executives felt that the IT function within the C-suite should develop greater influencing skills to better deliver the change their business requires.

Team deficiencies identified in the survey included the absence of key performance indicators to measure digital transformation progress. The CFO was also found to be the least aligned member of the C-suite team, particularly with the CIO.

To overcome these divisions, the report recommends that organizations invest time in ‘bridging the trust gap’ between functions and seek to ease tensions, especially between the offices of the CIO and the CFO. An important factor is establishing which function has accountability. Another measure is ensuring that data is more transparent and that key metrics are reported in ‘real time’.

The report also charts how fully digital transformation is being embraced, with leaders at global brands adopting processes and technologies such as artificial intelligence, workplace reskilling, cloud computing, agile working and decentralized decision-making.
