
Accelerating the future of privacy through SmartData agents


Imagine a future where you can communicate with your smartphone – or whatever digital extension of you exists at that time – through an evolved smart digital agent that readily understands you, your needs, and exists on your behalf to procure the things and experiences you want. What if it could do all this while protecting and securing your personal information, putting you firmly in control of your data?

Dr. George Tomko, University of Toronto

Dr. George Tomko, Ph.D., Expert-in-Residence at IPSI (the Privacy, Security and Identity Institute) at the University of Toronto, Adjunct Professor in Computer Science at Ryerson University, and a neuroscientist, believes the time is ripe to address the privacy and ethical challenges we face today, and to put in place a system that works for individuals while delivering effective business performance and minimizing harm to society at large. I had the privilege of meeting George to discuss his brainchild, SmartData: the development of intelligent agents as a solution to data protection.

As AI explodes, we are witnessing incident after incident, from technology mishaps to data breaches, data misuse, and erroneous and even deadly outcomes. My recent post, Artificial Intelligence Needs to Reset, argues for the need to take a step back, slow the course of AI, and examine these events with a view to educating, fixing, preventing and regulating toward effective and sustainable implementations.

Dr. Tomko is not new to the topic of privacy. He invented Biometric Encryption as well as the Anonymous Database in the early ’90s. His invention of SmartData was published in SmartData: Privacy Meets Evolutionary Robotics, co-authored with Dr. Ann Cavoukian, former three-term Privacy Commissioner of Ontario and creator of Privacy by Design. This led to his current work, SmartData Intelligent Agents, the subject of this article.

There is an inherent danger in today’s model. The internet did not evolve along its intended path. Tim Berners-Lee envisioned an open internet, owned by no one,

…an open platform that allows anyone to share information, access opportunities and collaborate across geographical boundaries…

That vision has been challenged. The spread of misinformation and propaganda online has exploded, partly because of the way the advertising systems of large digital platforms such as Google or Facebook have been designed to hold people’s attention…

People are being distorted by very finely trained AIs that figure out how to distract them.

What has evolved is a system that’s failing. Tomko points to major corporations and digital gatekeepers who are accumulating the bulk of the world’s personal data:

He who has the personal data has the power, and as you accumulate more personal information (personally identifiable information, location, purchases, web surfing, social media), in effect you make it more difficult for competitors to get into the game. The current oligopoly of Facebook, Google, Amazon, etc. will make it more difficult for companies like DuckDuckGo and Akasha to thrive.

That would be okay if these companies utilized the data in accordance with the positive consent of the data subject, for the primary purpose intended, and protected it against hacking. However, we know that’s not happening. Instead, they are using it for purposes not intended, selling the data to third parties, and transferring it to governments for surveillance, often without a warrant based on probable cause.

Tomko asserts that if Elon Musk and the late Stephen Hawking are correct about the potential for a dystopian AI of the kind popularized by Skynet in The Terminator series, such an outcome is most likely if the AI has access to large amounts of personal data centralized in databases. While this implies an AI with malevolent intentions, humans are relentlessly innovative, and Tomko argues for the importance of putting roadblocks in place before this happens.

Enter SmartData. This is the evolution of Privacy by Design, which shifts control from the organization and places it directly in the hands of the individual (the data subject).

SmartData empowers personal data by, in effect, wrapping it in a cloak of intelligence such that it becomes the individual’s virtual proxy in cyberspace. No longer will personal data be shared or stored in the cloud as merely data, encrypted or otherwise; it will now be stored and shared as a constituent of the binary string specifying the neural weights of the entire SmartData agent. This agent proactively builds in privacy, security and user preferences right from the outset, not as an afterthought.

For SmartData to succeed, it requires a radical, new approach – with an effective separation from the centralized models which exist today.

Privacy Requires Decentralization and Distribution

Our current systems and policies present hurdles we need to overcome as privacy becomes the norm. The advent of Europe’s GDPR is already making waves and challenging business today. GDPR’s Article 20 (the right to data portability) and Article 17 (the right to be forgotten) mandate mechanisms for downloading personal data and for its absolute deletion, which run counter to current directives and processes. Most systems build in data redundancy, so copies of the data will always exist; systems will need to evolve to fully comply with these GDPR mandates. In addition, customer transactions on private sites are collected, analyzed, shared and sometimes sold, with a prevailing mindset that data ownership sits at the organizational level.

Tomko explains the SmartData solution must be developed in an open source environment.

A company that says: “Trust me that the smart agent or app we developed has no “back-door” to leak or surreptitiously share your information,” just won’t cut it any longer. Open source enables hackers to verify this information. I believe that such a platform technology will result in an ecosystem that will grow, as long as there is a demand for privacy.

Within this environment, a data utility within the SmartData platform can request all of a subject’s personal data from the organizational database under GDPR-like regulations. As per the SmartData Security Structure, each subject’s personal data is then cleaned and collated into content categories, e.g., A = MRI data, B = subscriber data. The data is de-identified, segmented, encrypted and placed in locked boxes (files in the cloud) identified by categorized metatags. A “Trusted Enclave” such as Intel’s SGX will be associated with each data subject’s personal data. The enclave will generate a public/private key pair and output the public key to encrypt the personal data by category.

Today, information is stored and accessed by location. If breaches occur, this practice increases the risk of exposure because information about data subjects is bundled together. Categorizing and storing personal information by content effectively prevents personal identity from being connected with the data itself. Only the SmartData agent will know its data subject and the pointers to that subject’s unique personal information, accessed by a unique private key.

SmartData Security Structure, George Tomko
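To make the structure concrete, here is a minimal Python sketch of how per-category “locked boxes” could be produced with an enclave-style key pair, assuming a hybrid scheme built on the cryptography library. The category labels, helper names and storage format are illustrative assumptions, not part of Tomko’s specification.

```python
# Minimal sketch: per-category "locked boxes" keyed by an enclave-style key pair.
# Category names and helper functions are illustrative only.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

# Stand-in for the key pair a trusted enclave (e.g., Intel SGX) would generate.
# In the real design the private key never leaves the enclave.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def lock_category(category_tag: str, plaintext: bytes) -> dict:
    """Encrypt one category of de-identified data and wrap its key for the enclave."""
    data_key = Fernet.generate_key()                  # fresh symmetric key per category
    ciphertext = Fernet(data_key).encrypt(plaintext)  # the "locked box" stored in the cloud
    wrapped_key = public_key.encrypt(                 # only the enclave can unwrap this
        data_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    return {"metatag": category_tag, "box": ciphertext, "wrapped_key": wrapped_key}

# Category A = MRI data, Category B = subscriber data (labels from the article).
boxes = [lock_category("A:MRI", b"<de-identified MRI record>"),
         lock_category("B:subscriber", b"<de-identified subscriber record>")]
```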

Ensuring Effective Performance while Maintaining Individual Privacy

Organizations that want to utilize data effectively to improve efficiencies and performance will need to take a different route. How do companies analyze and target effectively without exposing personal data? Tomko declares that using Federated Learning to distribute data analytics such as machine learning (ML) is key:

Federated Learning provides an alternative to centralizing a set of data to train a machine learning algorithm, by leaving the training data at their source. For example, a machine learning algorithm can be downloaded to the myriad of smartphones, leveraging the smartphone data as training subsets. The different devices can now contribute to the knowledge and send back the trained parameters to the organization to aggregate.  We can also substitute smartphones with the secure enclaves that protect each data subject’s personal information.

Here’s how it would work: an organization wants to develop a particular application based on machine learning, which requires some category of personal data from a large number of data subjects as a training set. Once it has received consent from the data subjects, it would download the learning algorithm to each subject’s trusted enclave. The relevant category of encrypted personal data would then be decrypted with the enclave’s secret key and used as input to the machine learning algorithm. The trained learning weights from all data subjects’ enclaves would then be sent to a master enclave within this network, which aggregates the weights. This iteration would continue until the accuracies are optimized. Once the algorithm is optimized, the aggregated weights would be sent to the organization. Tomko affirms,


The organization will only have the aggregated weights that had been optimized based on the personal data of many data subjects. They would not be able to reverse engineer and determine the personal data of any single data subject. The organization would never have access to anyone’s personal data, plaintext or otherwise; however, it would still be able to accomplish its data analytics objectives.

Federated Learning – Master Enclave, George Tomko
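As a rough illustration of the aggregation loop described above, the Python sketch below averages locally trained weights across per-subject enclaves. The toy model, the train_locally stub and the unweighted averaging are simplifying assumptions for illustration, not the SmartData implementation.

```python
# Minimal sketch of federated averaging across per-subject enclaves (NumPy only).
# train_locally() and the enclave stand-ins are hypothetical placeholders.
import numpy as np

def train_locally(global_weights: np.ndarray, local_data) -> np.ndarray:
    """Inside an enclave: decrypt the subject's data, run a few local training
    steps starting from the global weights, and return the updated weights.
    Stubbed out here with a no-op update."""
    return global_weights  # placeholder for real gradient steps on local_data

def federated_round(global_weights, enclaves):
    """One round: each enclave trains on its own data; the master enclave
    aggregates only the weights, never the personal data."""
    local_updates = [train_locally(global_weights, e) for e in enclaves]
    return np.mean(local_updates, axis=0)  # simple unweighted average

global_weights = np.zeros(10)              # toy model with 10 parameters
enclaves = [object() for _ in range(100)]  # stand-ins for subjects' trusted enclaves
for _ in range(20):                        # iterate until accuracy stops improving
    global_weights = federated_round(global_weights, enclaves)
# Only the final aggregated weights are released to the organization.
```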

Building a Secure Personal Footprint in the Cloud

To ensure personal web transactions are secure, a person instructs their SmartData agent to, for example, book a flight. The instruction is transmitted to the cloud using a secure protocol such as IPsec. This digital specification (a binary string) is decrypted and downloaded to one of many reconfigurable computers, which interpret the instructions.

Natural language processing (NLP) would convert the verbal instructions into a formal language, and would handle the encoded communications back and forth between subject and organization to facilitate the transaction, eliciting permission for passport and payment information. What’s different is the development of an agreement (stored on the blockchain) that confirms the consented terms of use between the parties. It also adds an incentive component through cryptocurrency that enables the data subject to be compensated for their information, if required. This mechanism would be used before every transaction to ensure transparency and expediency between the parties.
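As an illustration of the kind of consent agreement such a transaction might anchor on a blockchain, here is a small Python sketch. The field names, pseudonymous identifiers and hashing scheme are assumptions made for illustration, not a published SmartData format.

```python
# Illustrative sketch of a consent agreement prepared for a blockchain ledger.
# Field names and the hashing scheme are assumptions, not a SmartData spec.
import hashlib, json, time

def make_consent_record(subject_id, organization, purpose, data_categories, fee_tokens):
    """Bundle the consented terms of use; the digest is what would be anchored on-chain."""
    terms = {
        "subject": subject_id,               # pseudonymous identifier, not raw identity
        "organization": organization,
        "purpose": purpose,                  # primary purpose the subject consented to
        "data_categories": data_categories,  # e.g. ["passport", "payment"]
        "compensation_tokens": fee_tokens,   # optional cryptocurrency incentive
        "timestamp": int(time.time()),
    }
    digest = hashlib.sha256(json.dumps(terms, sort_keys=True).encode()).hexdigest()
    return terms, digest

terms, on_chain_hash = make_consent_record(
    "subject-7f3a", "ExampleAirline", "book one flight",
    ["passport", "payment"], fee_tokens=0)
```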

Tomko realizes Blockchain has its limitations:

Everyone wants to remove the intermediary and the crypto environment is moving quickly. However, we can’t rely on Blockchain alone for privacy because it is transparent, and we can’t use it for computation because it is not scalable.

AI as it exists today is hitting stumbling blocks. Most experiments fall largely within ANI (Artificial Narrow Intelligence), with models and solutions built for very specific domains that cannot be transferred to adjacent domains. Deep Learning has its limitations. The goal of SmartData is to develop a smart digital personal assistant that serves as a proxy for the data subject across varied transactions and contexts. Tomko illustrates,

With current Deep Learning techniques, different requests such as ‘Hey SmartData, buy me a copy of …” or “book me a flight to…” encompass different domains, and accordingly, require large sets of training data specific to that domain. The different domain-specific algorithms would then need to be strung together into an integrated whole, which, in effect, would become SmartData. This method would be lengthy, computationally costly and ultimately not very effective.

The promise of AI is to explain and understand the world around us, and that promise has yet to reveal itself.

Tomko explains:

To date, standard Machine Learning (ML) cannot achieve the incremental learning that is necessary for intelligent machines, and it lacks the ability to store learned concepts or skills in long-term memory and use them to compose and learn more sophisticated concepts or behaviors. A machine that emulates the human brain in order to explain and generally model the world cannot be solely engineered. It has to be evolved within a framework of Thermodynamics, Dynamical Systems Theory and Embodied Cognition.

Embodied Cognition is a field of research that “emphasizes the formative role that both the agents’ body and the environment will play in the development of cognitive processes.” Put simply, these processes will be developed when these tightly coupled systems emerge from the real-time, goal-directed interactions between the agents and their environments, and in SmartData’s case, a virtual environment. Tomko notes the underlying foundation of intelligence (including language) is action.

Actions cannot be learned in the traditional ML way but must be evolved through embodied agents. The outcomes of these actions will determine whether the agent can satisfy the data subject’s needs.

Tomko references W. Ross Ashby, a cybernetics pioneer of the 1950s, who proposed that every agent has a set of essential variables which serve as its benchmark needs and against which all of its perceptions and actions are measured. The existential goal is always to satisfy those needs. Using this model (see below), we can train the agent to satisfy the data subject’s needs and to retain the subject’s code of ethics. Essential variables are identified that determine the threshold between low and high surprise. Ideally, the agent should try to maintain a low-surprise, homeostatic state (within the manifold) to be satisfied; anything outside the manifold, i.e., high surprise, should be avoided. Tomko uses Ashby’s example of a mouse that wants to survive: when a cat is introduced, the mouse builds a causal model of its needs, comparing its sensory inputs against its benchmark needs to determine how to act in the cat’s presence and maintain its life-giving states.

Apply this to individual privacy. As per Tomko,

The survival range will include parameters for privacy protection. Therefore, if the needs change, or there is a modified environment or changing context, the agent will modify its behavior automatically and adapt, because its needs are the puppet-master.

This can be defined as a reward function. We reward actions that result in low surprise or low entropy. For data privacy, ideally, we want to avoid any potential actions that would lead to privacy violations equating to high surprise (and greater disorder).


Manifold of Needs, George Tomko
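A toy Python sketch of this reward logic is below, treating surprise as the negative log probability that an action keeps the agent inside its manifold of needs. The probabilities and threshold are invented for illustration; they are not part of Tomko’s model.

```python
# Toy illustration: reward actions that keep the agent in a low-surprise,
# homeostatic state. The probability model and threshold are made-up values.
import math

def surprise(p_within_manifold: float) -> float:
    """Surprise as the negative log probability of the predicted state
    staying inside the manifold of needs."""
    return -math.log(p_within_manifold)

def reward(action_outcome_prob: float, threshold: float = 1.0) -> float:
    """Positive reward for low-surprise outcomes; a penalty (e.g., a potential
    privacy violation) once surprise exceeds the threshold."""
    s = surprise(action_outcome_prob)
    return 1.0 - s if s <= threshold else -s

print(reward(0.95))  # routine purchase that respects privacy preferences: rewarded
print(reward(0.05))  # action likely to leak personal data (high surprise): penalized
```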

Toronto’s Sidewalk Labs: The Need for Alternative Data Practices

At the time of writing, Dr. Ann Cavoukian, Expert-in-Residence at Ryerson University and former three-term Privacy Commissioner of Ontario, resigned as an advisor to Sidewalk Labs in Toronto, a significant Alphabet-backed project that aimed to develop one of the first smart cities of privacy in the world. Her resignation drew national media attention because of her strong advocacy for individual privacy. She explains,

My reason for resigning from Sidewalk Labs is only the tip of the iceberg of a much greater issue in our digitally oriented society.  The escalation of personally identified information being housed in central databases, controlled by a few dominant players, with the potential of being hacked and used for unintended secondary uses, is a persistent threat to our continued functioning as a free and open society.

Organizations in possession of the most personal information about users tend to be the most powerful. Google, Facebook and Amazon are but a few examples in the private sector… As a result, our privacy is being infringed upon, our freedom of expression diminished, and our collective knowledge base outsourced to a few organizations who are, in effect,  involved in surveillance fascism. In this context, these organizations may be viewed as bad actors; accordingly, we must provide individuals with a viable alternative…

The alternative to centralization of personal data storage and computation is decentralization – place all personal data in the hands of the data-subject to whom it relates, ensure that it is encrypted, and create a system where computations may be performed on the encrypted data, in a distributed manner… This is the direction that we must take, and there are now examples of small startups using the blockchain as a backbone infrastructure, taking that direction.  SmartData, Enigma, Oasis Labs, and Tim Berners-Lee’s Solid platform are all developing methods to, among other things, store personal information in a decentralized manner.

Other supporters of Dr. George Tomko concur:

Dr. Don Borrett, a practicing neurologist with a background in evolutionary robotics and a Masters from the Institute for the History and Philosophy of Science and Technology at the University of Toronto, states:

By putting control of personal data back into the hands of the individual, the SmartData initiative provides a framework by which respect for the individual and responsibility for the collective good can both be accommodated.

Bruce Pardy is a Law Professor at Queen’s University who has written on a wide range of legal topics, including human rights, climate change policy, free markets, and economic liberty. He declares:

The SmartData concept is not just another appeal for companies to do better to protect personal information. Instead, it proposes to transform the privacy landscape. SmartData technology promises to give individuals the bargaining power to set their own terms for the use of their data and thereby to unleash genuine market forces that compel data-collecting companies to compete to meet customer expectations.

Dr. Tomko is correct. The time is indeed ripe, and Sidewalk Labs, an important experiment that will vault us into the future, is an example of the journey many companies must take to propel us toward a future where privacy is commonplace.

This article originally appeared on Forbes.

Author: Hessie Jones

Hessie Jones is the Founder of ArCompany advocating AI readiness, education and the ethical distribution of AI. She is also Director for the International Council, Global Privacy and Security by Design.  As a seasoned digital strategist, author, tech geek and data junkie, she has spent the last 18 years on the internet at Yahoo!, Aegis Media, CIBC, and Citi, as well as tech startups including Cerebri, OverlayTV and Jugnoo. Hessie saw things change rapidly when search and social started to change the game for advertising and decided to figure out the way new market dynamics would change corporate environments forever: in process, in culture and in mindset. She launched her own business, ArCompany in social intelligence, and now, AI readiness. Through the weekly think tank discussions her team curated, she surfaced the generational divide in this changing technology landscape across a multitude of topics. Hessie is also a regular contributor to Towards Data Science on Medium and Cognitive World publications. This article solely represents my views and in no way reflects those of DXJournal. Please feel free to contact me h.jones@arcompany.co


Where is the financial value in AI? Employing multiple human-machine learning approaches, say experts

According to a new study, only 10% of organizations are achieving significant financial benefits with AI.


AI is everywhere these days — especially as we work to fight the spread of COVID-19

Even in the “before times,” AI was a hot topic that always found itself in the center of most digital transformation conversations. A new study from MIT Sloan Management Review, BCG GAMMA, and BCG Henderson Institute, however, prompts a crucial question:

Are You Making the Most of Your Relationship with AI?

Finding value

Despite the proliferation of the technology and increased investment, according to the report, just 10% of organizations are achieving significant financial benefits with AI. The secret ingredient in these success stories? “Multiple types of interaction and feedback between humans and AI,” which translated into a six-times better chance of amplifying the organization’s success with AI.

“The single most critical driver of value from AI is not algorithms, nor technology — it is the human in the equation,” affirms report co-author Shervin Khodabandeh.


From a survey of over 3,000 managers from 29 industries based in 112 countries — plus in-depth interviews with experts — the report outlined three investments organizations can make to maximize value:

  • The likelihood of achieving benefits increases by 19% with investment in AI infrastructure, talent, and strategy.
  • Scalability. When organizations think beyond automation as a use case, the likelihood of financial benefit increases by 18%.
  • “Achieving organizational learning with AI (drawing on multiple interaction modes between humans and machines) and building feedback loops between human and AI increases that likelihood by another 34%.”

According to report co-author Sam Ransbotham, at the core of successfully creating value from AI is continuous learning between human and machine:

“Isolated AI applications can be powerful. But we find that organizations leading with AI haven’t changed processes to use AI. Instead, they’ve learned with AI how to change processes. The key isn’t teaching the machines. Or even learning from the machines. The key is learning with the machines — systematically and continuously.” 

Continued growth

While just 1 in 10 organizations finds financial benefits with AI, 70% of respondents understand how it can generate value — up from 57% in 2017.

Additionally, 59% of respondents have an AI strategy, compared to 39% in 2017, the survey found. Finally, 57% of respondents say their organizations are “piloting or deploying” AI — not a huge increase from 2017 (46%). 

One of the biggest takeaways? According to co-author David Kiron, “companies need to calibrate their investments in technology, people, and learning processes.”

“Financial investments in technology and people are important, but investing social capital in learning is critical to creating significant value with AI.”

Author: DX Journal Staff

DX Journal covers the impact of digital transformation (DX) initiatives worldwide across multiple industries.


Bringing DX to the food supply chain in a pandemic

In a new paper, supply chain stakeholders share how COVID-19 has affected the transformation of the sector.


There’s little doubt that COVID-19 had a profound effect on the food supply chain.

As one example, just think back to roughly March of this year, when virus transmission was rapidly picking up speed. Remember the reports of food and beverage companies only producing their most popular or essential products? Or how it would take slightly longer than usual to restock certain products? What about the rush to integrate — or quickly improve the efficiency of — digital and e-commerce?

Panning out a bit, think about food safety and quality professionals. The need to stay safe — and in many cases, stay at home — meant performing the very hands-on job of monitoring, auditing, and inspecting at a distance, i.e., digitally.

When the food supply chain was hit by shortages, delays, breakdowns, and lockdowns, the end result was — as in so many sectors — a rapid digital transformation.

As The Food Safety Market — an SME-powered industrial data platform dedicated to boosting the competitiveness of European food certification — elaborates in a new discussion paper, “technology has played an important role in enabling business continuity in the new reality.”

The paper — Digital Transformation of Food Quality & Safety: How COVID-19 accelerates the adoption of digital technologies across the food supply chain — features industry experts from companies like Nestlé, Ferrero, PepsiCo, McCormick & Company, and more discussing the effects of the pandemic on the supply chain.

A few highlights from the paper:

  • John Carter, Area Europe Quality Director for Ferrero, put the issue of food access into perspective at the start of his interview:

“The production of food defines our world. The effects of agriculture on our daily lives are so omnipresent that they can be easy to overlook; landscapes and societies are profoundly influenced by the need to feed our growing population. But much has been taken for granted. Only occasionally are we forced to consider: ‘where does our food come from?'”

  • Ellen de Brabander, Senior Vice President of R&D for PepsiCo, provided insight on the cost benefits of digital transformation:

“The need for customization is a big driver for accelerating digital transformation and moving away from a ‘one size fits all’ approach. This means that the cost to develop and produce a product must be lower and digital technologies provide a clear opportunity here.” 

  • Clare Menezes, Director of Global Food Integrity for McCormick & Company, brought up one area where digital tools need to go:

“There aren’t any areas where digital tools “fail”, but there is a need for tools that ‘prove out’ predictions around where the next integrity event will play out and how it could lead to quality or food safety failure. These tools are an obvious candidate for AI given the number of PESTLE factors that might come into play.” 

Want to read all of the interviews? Check out the paper here.

Author: DX Journal Staff

DX Journal covers the impact of digital transformation (DX) initiatives worldwide across multiple industries.


Looking ahead at Digital Transformation in 2021

What’s coming next year? More massive growth in DX investment and IT, and a pivotal role for CIOs in economic recovery.


What’s to come next year? According to IDC’s 2021 predictions, more massive growth in digital transformation investment and IT, and a pivotal role for CIOs in economic recovery.

Every year, the leading DX market research firm releases its series of FutureScape reports, which, according to IDC, “are used to shape IT strategy and planning for the enterprise by providing a basic framework for evaluating IT initiatives in terms of their value to business strategy now and in the foreseeable future.”

Let’s look at the highlights for three of the IDC FutureScape reports for 2021:

Digital Transformation Predictions

As we’ve firmly established, the global COVID-19 pandemic has largely accelerated DX efforts. 

According to IDC’s report, “direct digital transformation (DX) investment is still growing at a compound annual growth rate (CAGR) of 15.5% from 2020 to 2023.” It’s expected that investment will approach $6.8 trillion, as a result of companies quickly modifying their existing strategies in an effort to prioritize digital-first.

“Organizations with new digital business models at their core that are successfully executing their enterprise-wide strategies on digital platforms are well-positioned for continued success in the digital platform economy,” explains Shawn Fitzgerald, research director, Worldwide Digital Transformation Strategies. 

“Our 2021 digital transformation predictions represent areas of notable opportunity to differentiate your own digital transformation strategic efforts.”

IDC has also revealed their top 10 DX predictions. The top three are:

  • Accelerated DX Investments Create Economic Gravity: “The economy remains on course to its digital destiny with 65% of global GDP digitized by 2022.”

  • Digital Organization Structures and Roadmaps Mature: “By 2023, 75% of organizations will have comprehensive digital transformation (DX) implementation roadmaps, up from 27% today, resulting in true transformation across all facets of business and society.”

  • Digital Management Systems Mature: “By 2023, 60% of leaders in G2000 organizations will have shifted their management orientation from processes to outcomes, establishing more agile, innovative, and empathetic operating models.”

Read the full list here.

IT Predictions

When looking at IDC’s IT predictions for 2021, the firm explains that “to succeed in this period of change, CIOs and digitally driven C-suites need to focus on three areas over the next five years.”

First, they need to address gaps within IT that emerged in the immediate response to the pandemic. Next is making sure that accelerated IT and DX efforts are “locked in.”

Finally, and as IDC says, most importantly, “they must seek opportunities to leverage new technologies to take advantage of competitive/industry disruptions and extend capabilities for business acceleration in the Next Normal.”

While COVID has shown that organizations can — and for the most part, did — respond and adapt quickly, the ability to do so is a marker of future success in the digital economy, explained Rick Villars, IDC Group Vice President for Worldwide Research.

“A large percentage of a future enterprise’s revenue depends upon the responsiveness, scalability, and resiliency of its infrastructure, applications, and data resources.”

IDC again released a top 10 list of predictions for this category. The top three are:

  • The acceleration of the shift to cloud-centric infrastructure
  • The increasing importance of the edge
  • “The Intelligent Digital Workspace”

Read the full list here.

CIO Agenda Predictions

The pandemic has caused many business leaders to rethink their entire organizational structure and workflow. 

CIOs have faced many challenges this year — given the speed of digitization for most businesses — and they’ll need to be in the front seat of the upcoming recovery efforts as well, IDC reports.

“In a time of turbulence and uncertainty, CIOs and senior IT leaders must discern how IT will enable the future growth and success of their enterprise while ensuring its resilience,” said Serge Findling, vice president of Research for IDC’s IT Executive Programs.

IDC’s list of predictions starts with these three at the top:

  • “By 2022, 65% of CIOs will digitally empower and enable front-line workers with data, AI, and security to extend their productivity, adaptability, and decision-making in the face of rapid changes.”
  • Cyber attacks, trade wars, and a shaky economy will mean CIOs struggle to adapt; IDC predicts that 30% of CIOs will not be able to protect trust.
  • Technical debt accumulated during the pandemic will cause “financial stress, inertial drag on IT agility, and ‘forced march’ migrations to the cloud.”

Read the full list here.

Author: DX Journal Staff

DX Journal covers the impact of digital transformation (DX) initiatives worldwide across multiple industries.
