
Technology

Accelerating the future of privacy through SmartData agents

Published on Digital Life of Mind

Imagine a future where you can communicate with your smartphone – or whatever digital extension of you exists at that time – through an evolved smart digital agent that readily understands you, your needs, and exists on your behalf to procure the things and experiences you want. What if it could do all this while protecting and securing your personal information, putting you firmly in control of your data?

Dr. George Tomko, University of Toronto

Dr. George Tomko, Expert-in-Residence at IPSI (the Privacy, Security and Identity Institute) at the University of Toronto, Adjunct Professor of Computer Science at Ryerson University, and a neuroscientist, believes the time is ripe to address the privacy and ethical challenges we face today, and to put in place a system that will work for individuals while delivering effective business performance and minimizing harms to society at large. I had the privilege of meeting George to discuss his brainchild, SmartData: intelligent agents as a solution to data protection.

As AI explodes, we are witnessing incident after incident, from technology mishaps to data breaches, data misuse, and erroneous and even deadly outcomes. My recent post, Artificial Intelligence needs to Reset, argues for taking a step back, slowing the course of AI, and examining these events with a view to educating, fixing, preventing and regulating toward effective and sustainable implementations.

Dr. Tomko is not new to the topic of privacy. He invented Biometric Encryption as well as the Anonymous Database in the early 1990s. His invention of SmartData was published in SmartData: Privacy Meets Evolutionary Robotics, co-authored with Dr. Ann Cavoukian, former three-term Privacy Commissioner of Ontario and inventor of Privacy by Design. This led to his current work, SmartData Intelligent Agents, the subject of this article.

There is an inherent danger in the current model. The internet did not evolve along its intended path. Tim Berners-Lee envisioned an open internet, owned by no one,

…an open platform that allows anyone to share information, access opportunities and collaborate across geographical boundaries…

This vision has been challenged: the spread of misinformation and propaganda online has exploded, partly because of the way the advertising systems of large digital platforms such as Google or Facebook have been designed to hold people’s attention…

People are being distorted by very finely trained AIs that figure out how to distract them.

What has evolved is a system that’s failing. Tomko points to major corporations and digital gatekeepers who are accumulating the bulk of the world’s personal data:

He who has the personal data has the power, and as you accumulate more personal information (personally identifiable information, location, purchases, web surfing, social media), in effect you make it more difficult for competitors to get into the game. The current oligopoly of Facebook, Google, Amazon, etc. will make it more difficult for companies like DuckDuckGo and Akasha to thrive.

That would be okay if these companies were to utilize the data in accordance with the positive consent of the data subject, for the primary purpose intended, and protect it against hacking. However, we know that’s not happening. Instead, they are using it for purposes not intended, selling the data to third parties, and transferring it to government for surveillance, often without a warrant based on probable cause.

Tomko asserts that if Elon Musk and the late Stephen Hawking are correct about the potential for a dystopian AI, popularized as Skynet in The Terminator series, such an outcome becomes far more likely if the AI has access to large amounts of personal data centralized in databases. While this implies an AI with malevolent intentions, humans are relentlessly innovative, and Tomko argues for putting roadblocks in place before this happens.

Enter SmartData. This is the evolution of Privacy by Design, which shifts control from the organization and places it directly in the hands of the individual (the data subject).

SmartData empowers personal data by, in effect, wrapping it in a cloak of intelligence such that it becomes the individual’s virtual proxy in cyberspace. No longer will personal data be shared or stored in the cloud as mere data, encrypted or otherwise; it will now be stored and shared as a constituent of the binary string specifying the neural weights of the entire SmartData agent. This agent proactively builds in privacy, security and user preferences, right from the outset, not as an afterthought.

For SmartData to succeed, it requires a radical, new approach – with an effective separation from the centralized models which exist today.

Privacy Requires Decentralization and Distribution

Our current systems and policies present hurdles we need to overcome as privacy becomes the norm. The advent of Europe’s GDPR is already making waves and challenging business today. Through GDPR’s Article 20 (The Right to Data Portability) and Article 17 (The Right to Be Forgotten), the mechanisms for downloading personal data, and for its absolute deletion, run counter to current directives and processes: most systems ensure data redundancy, so copies of data persist. Systems will need to evolve to comply fully with these GDPR mandates. In addition, customer transactions on private sites are collected, analyzed, shared and sometimes sold, with a prevailing mindset that data is owned at the organizational level.

Tomko explains that the SmartData solution must be developed in an open-source environment.

A company that says: “Trust me that the smart agent or app we developed has no “back-door” to leak or surreptitiously share your information,” just won’t cut it any longer. Open source enables hackers to verify this information. I believe that such a platform technology will result in an ecosystem that will grow, as long as there is a demand for privacy.

Within this environment, a data utility within the SmartData platform can request all personal data under GDPR-like regulations from the organizational database. As per the SmartData Security Structure, each subject’s personal data is then cleaned and collated into content categories, e.g., A = MRI data, B = subscriber data. The data are de-identified, segmented, encrypted and placed in locked boxes (files in the cloud) identified by categorized metatags. A “Trusted Enclave” like Intel’s SGX will be associated with each data subject’s personal data. The enclave will generate a public/private key pair and output the public key to encrypt the personal data by category.

Today, information is stored and accessed by location. This practice increases the risk of exposure if a breach occurs, because information about many data subjects is bundled together. Categorizing and storing personal information by content effectively prevents personal identity from being connected with the data itself. Only SmartData will know its data subjects and the pointers to their unique personal information, accessed by a unique private key.
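As a rough illustration of the locked-box idea, the sketch below segments de-identified records by content category and encrypts each category under a per-subject key. It is a toy stand-in only: the category tags and record values are hypothetical, and a simple hash-based XOR stream cipher takes the place of the SGX enclave's real public/private-key encryption.

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Expand a key into a keystream (toy stand-in for real encryption)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """XOR stream cipher: the same call encrypts and decrypts."""
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

# De-identified records segmented by content category, as in the
# SmartData Security Structure (category tags are illustrative).
records = {
    "A_mri_data": b"scan-series-0042",
    "B_subscriber_data": b"plan=premium;renewal=2025-01",
}

# One enclave key per data subject. In SmartData this would be an
# SGX-generated public/private key pair; a symmetric key stands in here.
enclave_key = secrets.token_bytes(32)

# Each category is encrypted separately and stored as a "locked box"
# addressed only by its category metatag, never by the subject's identity.
locked_boxes = {tag: xor_crypt(enclave_key, data) for tag, data in records.items()}

# Only the holder of the enclave's key can open a box.
recovered = xor_crypt(enclave_key, locked_boxes["A_mri_data"])
```

Because XOR is its own inverse, the same call both locks and unlocks a box; in the real design only the enclave ever holds the decryption key.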

SmartData Security Structure, George Tomko

Ensuring Effective Performance while Maintaining Individual Privacy

Organizations that want to utilize data effectively to improve efficiencies and organizational performance will take a different route. How do companies analyze and target effectively without exposing personal data? Tomko declares that using Federated Learning to distribute data analytics such as Machine Learning (ML) is key:

Federated Learning provides an alternative to centralizing a set of data to train a machine learning algorithm, by leaving the training data at their source. For example, a machine learning algorithm can be downloaded to the myriad of smartphones, leveraging the smartphone data as training subsets. The different devices can now contribute to the knowledge and send back the trained parameters to the organization to aggregate.  We can also substitute smartphones with the secure enclaves that protect each data subject’s personal information.

Here’s how it would work: An organization wants to develop a particular application based on machine learning, which requires some category of personal data from a large number of data-subjects as a training set. Once it has received consent from the data subjects, it would download the learning algorithm to each subject’s trusted enclave. The relevant category of encrypted personal data would then be inputted, decrypted by the enclave’s secret key, and used as input to the machine learning algorithm. The trained learning weights from all data-subjects’ enclaves would then be sent to a master enclave within this network to aggregate the weights. This iteration would continue until the accuracies are optimized. Once the algorithm is optimized, the weights would then be sent to the organization. Tomko affirms,


The organization will only have the aggregated weights that had been optimized based on the personal data of many data subjects. They would not be able to reverse engineer and determine the personal data of any single data subject. The organization would never have access to anyone’s personal data, plaintext or otherwise, however, would be able to accomplish their data analytic objectives.
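The round trip Tomko describes, local training inside each subject's enclave followed by weight aggregation in a master enclave, can be sketched as a minimal federated-averaging loop. The linear model, learning rate and sample data below are illustrative assumptions, not part of the SmartData design.

```python
import random

def local_train(weights, data, lr=0.1, epochs=50):
    """One subject's enclave trains a 1-D linear model y = w*x + b on
    private data; only the resulting weights ever leave the enclave."""
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return (w, b)

def federated_average(weight_list):
    """Master enclave step: plain FedAvg over the subjects' trained weights."""
    n = len(weight_list)
    return (sum(w for w, _ in weight_list) / n,
            sum(b for _, b in weight_list) / n)

random.seed(0)
# Three data subjects, each holding private samples of the same underlying
# relation y = 2x + 1 (hypothetical data that never leaves its enclave).
subjects = [
    [(x, 2 * x + 1 + random.uniform(-0.1, 0.1)) for x in (0.0, 0.5, 1.0)]
    for _ in range(3)
]

global_weights = (0.0, 0.0)
for _ in range(5):  # federated rounds
    local = [local_train(global_weights, data) for data in subjects]
    global_weights = federated_average(local)  # only weights are aggregated
```

After a few rounds the aggregated weights approach the true relation even though the organization never sees any individual subject's samples.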

Federated Learning – Master Enclave, George Tomko

Building a Secure Personal Footprint in the Cloud

To ensure personal web transactions are secure, a person will instruct their SmartData agent to, for example, book a flight. The instruction is transmitted to the cloud using a secure protocol such as IPsec. This digital specification (a binary string) is decrypted and downloaded to one of many reconfigurable computers, which will interpret the instructions.

Natural language processing (NLP) would convert the verbal instructions into a formal language, and would handle the encoded communications back and forth between subject and organization to facilitate the transaction, eliciting permission for passport and payment information. What’s different is the development of an agreement (stored on the blockchain) that confirms the consented terms of use between the parties. It also adds an incentive component through cryptocurrency that enables the data subject to be compensated for their information, if required. This mechanism would be used before every transaction to ensure transparency and expediency between the parties.
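A minimal sketch of the consent-agreement idea follows: each agreement is serialized, hash-linked to the previous record, and verifiable by anyone who recomputes the hash. The field names and the toy ledger are hypothetical; a production system would use a real blockchain and digital signatures.

```python
import hashlib
import json

def make_block(prev_hash: str, payload: dict) -> dict:
    """Append a record to a toy hash-linked ledger, a minimal stand-in
    for the blockchain on which the consent agreement is stored."""
    block = {"prev_hash": prev_hash, "payload": payload}
    serialized = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(serialized).hexdigest()
    return block

def verify(block: dict) -> bool:
    """Recompute the hash to confirm the recorded terms were not altered."""
    data = {k: v for k, v in block.items() if k != "hash"}
    serialized = json.dumps(data, sort_keys=True).encode()
    return hashlib.sha256(serialized).hexdigest() == block["hash"]

# Hypothetical consented terms of use for one flight-booking transaction;
# every field name here is illustrative, not part of any real protocol.
agreement = {
    "data_subject": "subject-123",
    "organization": "example-airline",
    "categories_shared": ["passport", "payment"],
    "purpose": "flight booking",
    "compensation_tokens": 2,  # optional cryptocurrency incentive
}

genesis = make_block("0" * 64, {"event": "ledger created"})
consent_block = make_block(genesis["hash"], agreement)
```

Any later change to the recorded terms invalidates the stored hash, which is what makes the agreement transparent to both parties.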

Tomko realizes Blockchain has its limitations:

Everyone wants to remove the intermediary and the crypto environment is moving quickly. However, we can’t rely on Blockchain alone for privacy because it is transparent, and we can’t use it for computation because it is not scalable.

AI as it exists today is hitting stumbling blocks. Most experiments sit largely within ANI (Artificial Narrow Intelligence), with models and solutions built for very specific domains that cannot be transferred to adjacent domains. Deep Learning has its limitations. The goal of SmartData is to develop a smart digital personal assistant to serve as a proxy for the data subject across varied transactions and contexts. Tomko illustrates,

With current Deep Learning techniques, different requests such as “Hey SmartData, buy me a copy of …” or “book me a flight to …” encompass different domains and, accordingly, require large sets of training data specific to each domain. The different domain-specific algorithms would then need to be strung together into an integrated whole, which, in effect, would become SmartData. This method would be lengthy, computationally costly and ultimately not very effective.

The promise of AI, to explain and understand the world around us, has yet to reveal itself.

Tomko explains:

To date, standard Machine Learning (ML) cannot achieve the incremental learning that is necessary for intelligent machines; it lacks the ability to store learned concepts or skills in long-term memory and use them to compose and learn more sophisticated concepts or behaviors. A system that emulates the human brain to explain and generally model the world cannot be solely engineered. It has to be evolved within a framework of Thermodynamics, Dynamical Systems Theory and Embodied Cognition.

Embodied Cognition is a field of research that “emphasizes the formative role that both the agents’ body and the environment will play in the development of cognitive processes.” Put simply, these processes will be developed when these tightly coupled systems emerge from the real-time, goal-directed interactions between the agents and their environments, and in SmartData’s case, a virtual environment. Tomko notes the underlying foundation of intelligence (including language) is action.

Actions cannot be learned in the traditional ML way but must be evolved through embodied agents. The outcomes of these actions will determine whether the agent can satisfy the data subject’s needs.

Tomko references W. Ross Ashby, a cybernetics pioneer from the 1950s, who proposed that every agent has a set of essential variables which serve as its benchmark needs, against which all of its perceptions and actions are measured. The existential goal is to always satisfy its needs. By using this model (see below), we can train the agent to satisfy the data subject’s needs and retain the subject’s code of ethics. Essential variables are identified that determine the threshold for low or high surprise. Ideally, the agent should try to maintain a low-surprise, homeostatic state (within the manifold) to be satisfied. Anything outside the manifold, i.e., high surprise, should be avoided. Tomko uses Ashby’s example of a mouse that wants to survive. If a cat is introduced, a causal model of needs is built such that the mouse compares its sensory inputs to its benchmark needs to determine how to act when the cat is present and maintain its life-giving states.

Apply this to individual privacy. As per Tomko,

The survival range will include parameters for privacy protection. Therefore, if the needs change, or there is a modified environment or changing context, the agent will modify its behavior automatically and adapt, because its needs are the puppet-master.

This can be defined as a reward function. We reward actions that result in low surprise or low entropy. For data privacy, ideally, we want to avoid any potential actions that would lead to privacy violations equating to high surprise (and greater disorder).
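One way to sketch such a reward function: treat each essential variable as having a homeostatic range, define surprise as the distance outside that range, and reward its negative. The variables and thresholds below are illustrative assumptions only, not values from the SmartData design.

```python
# Essential variables and their homeostatic ranges (illustrative values):
# "privacy_exposure" tracks how much identifying data an action reveals,
# "task_progress" tracks how well the subject's request is being met.
MANIFOLD = {
    "privacy_exposure": (0.0, 0.2),
    "task_progress": (0.6, 1.0),
}

def surprise(state: dict) -> float:
    """Total distance of the essential variables from the manifold;
    zero means a homeostatic, low-surprise state."""
    total = 0.0
    for var, (low, high) in MANIFOLD.items():
        v = state[var]
        if v < low:
            total += low - v
        elif v > high:
            total += v - high
    return total

def reward(state: dict) -> float:
    """Reward low-surprise outcomes; a privacy violation pushes the
    state outside the manifold and is penalized automatically."""
    return -surprise(state)

# Two candidate outcomes for the agent to compare:
safe_action = {"privacy_exposure": 0.1, "task_progress": 0.8}
risky_action = {"privacy_exposure": 0.9, "task_progress": 0.9}
```

An action that completes the task but leaks data scores worse than one that completes it within the privacy manifold, so privacy-preserving behavior falls out of the needs model rather than a bolted-on rule.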


Manifold of Needs, George Tomko

Toronto’s Sidewalk Labs: The Need for Alternative Data Practices

At the time of writing, Dr. Ann Cavoukian, Expert-in-Residence at Ryerson University and former three-term Privacy Commissioner of Ontario, resigned as an advisor to Sidewalk Labs in Toronto, a significant Alphabet-backed project that aimed to develop one of the first privacy-centred smart cities in the world. Her resignation made national headlines because of her strong advocacy for individual privacy. She explains,

My reason for resigning from Sidewalk Labs is only the tip of the iceberg of a much greater issue in our digitally oriented society.  The escalation of personally identified information being housed in central databases, controlled by a few dominant players, with the potential of being hacked and used for unintended secondary uses, is a persistent threat to our continued functioning as a free and open society.

Organizations in possession of the most personal information about users tend to be the most powerful. Google, Facebook and Amazon are but a few examples in the private sector… As a result, our privacy is being infringed upon, our freedom of expression diminished, and our collective knowledge base outsourced to a few organizations who are, in effect,  involved in surveillance fascism. In this context, these organizations may be viewed as bad actors; accordingly, we must provide individuals with a viable alternative…

The alternative to centralization of personal data storage and computation is decentralization – place all personal data in the hands of the data-subject to whom it relates, ensure that it is encrypted, and create a system where computations may be performed on the encrypted data, in a distributed manner… This is the direction that we must take, and there are now examples of small startups using the blockchain as a backbone infrastructure, taking that direction.  SmartData, Enigma, Oasis Labs, and Tim Berners-Lee’s Solid platform are all developing methods to, among other things, store personal information in a decentralized manner.

Other supporters of Dr. George Tomko concur:

Dr. Don Borrett, a practicing neurologist with a background in evolutionary robotics and a Master’s from the Institute for the History and Philosophy of Science and Technology at the University of Toronto, states:

By putting control of personal data back into the hands of the individual, the SmartData initiative provides a framework by which respect for the individual and responsibility for the collective good can be both accommodated.

Bruce Pardy, a Law Professor at Queen’s University who has written on a wide range of legal topics (human rights, climate change policy, free markets, and economic liberty, among others), declares:

The SmartData concept is not just another appeal for companies to do better to protect personal information. Instead, it proposes to transform the privacy landscape. SmartData technology promises to give individuals the bargaining power to set their own terms for the use of their data and thereby to unleash genuine market forces that compel data-collecting companies to compete to meet customer expectations.

Dr. Tomko is correct! The time is indeed ripe, and Sidewalk Labs, an important experiment that will vault us into the future, exemplifies the journey many companies must take to propel us toward a future where privacy is commonplace.

This originally appeared on Forbes.



Society desperately needs an alternative web


I see a society that is crumbling. Rampant technology is simultaneously capsizing industries that were previously the bread and butter of economic growth. Working men and women have felt its effects as wages stagnate and employment opportunities dwindle amid a progressively automated economy. Increasing wage inequality and financial vulnerability have given rise to populism, and the domino effects are spreading.

People are angry. They demand fairness and are threatened by policies and outsiders that may endanger their livelihoods. This has caused a greater cultural and racial divide within and between nations. Technology has enabled this anger to spread, influence and manipulate at a much greater speed than ever before resulting in increasing polarization and a sweeping anxiety epidemic.

Globally, we are much more connected – this, to our detriment. We’ve witnessed both government and business leverage technology to spread disinformation for their own gain. While regulators struggle to keep pace with these harms, the tech giants continue, unabated, to wield their influence and power to establish footprints that make both consumers and businesses increasingly dependent on their platforms and technology stacks. We cannot escape them, nor do we want to. Therein lies the concern…

This recent article, “The World is Choking on Data Pollution” offered a profound distillation of what we are witnessing today:

Progress has not been without a price. Like the factories of 200 years ago, digital advances have given rise to a pollution that is reducing the quality of our lives and the strength of our democracy… We are now face-to-face with a system embedded in every structure of our lives and institutions, shaping our society in ways that deeply impact our basic values.

Tim Berners-Lee’s Intent for the World Wide Web has Run Off-Course:

Tim Berners-Lee once held a Pollyannaish view that went like this: what if we could develop a web that was free for everyone to use and that would fuel creativity, connection, knowledge and optimism across the globe? He believed the internet to be a basic human right,

…That means guaranteeing affordable access for all, ensuring internet packets are delivered without commercial or political discrimination, and protecting the privacy and freedom of web users regardless of where they live.

Between 1989 and 1991, Tim Berners-Lee led the development of the World Wide Web, introducing HTML (HyperText Markup Language) for creating web pages, HTTP (HyperText Transfer Protocol) and URLs (Uniform Resource Locators).

The now-ubiquitous WWW set in motion a movement that has scaled tremendously, reinventing the way we do business, access and consume information, and create connections, while perpetuating an unrelenting mindset of innovation and optimism.

What has also transpired is a web of unbridled opportunism and exploitation, uncertainty and disparity. We see increasing pockets of silos and echo chambers fueled by anxiety, misplaced trust and confirmation bias. As mainstream consumers bear witness to these intentions, a growing marginalization propels more people to unplug from these communities and applications to safeguard their mental health. However, the addiction technology has produced cannot be easily remedied. In the meantime, people continue to suffer.

What has been most distressing are the effects of cyberbullying on our children. In 2016, the National Crime Prevention Council reported that 43% of teens had been subjected to cyberbullying, an increase of 11% from a decade prior. Some other numbing statistics:

  • “2017 Pediatric Academic Societies Meeting revealed the number of children admitted to hospitals for attempted suicide or expressing suicidal thoughts doubled between 2008 and 2015”
  • “Javelin Research finds that children who are bullied are 9 times more likely to be the victims of identity fraud as well.”
  • “Data from numerous studies also indicate that social media is now the favored medium for cyberbullies”

Big Tech: Too Big to Fail?

As the web evolved throughout the 90s we witnessed the emergence of hefty players like Google, Yahoo, Microsoft and later Facebook and Amazon. As Chris Dixon asserted:

During the second era of the internet, from the mid 2000s to the present, for-profit tech companies — most notably Google, Apple, Facebook, and Amazon (GAFA) — built software and services that rapidly outpaced the capabilities of open protocols. The explosive growth of smartphones accelerated this trend as mobile apps became the majority of internet use. Eventually users migrated from open services to these more sophisticated, centralized services. Even when users still accessed open protocols like the web, they would typically do so mediated by GAFA software and services.

Today, we appropriately apply a few acronyms to these giants: G-MAFIA (Google, Microsoft, Amazon, Facebook, IBM, Apple), FAANG (Facebook, Apple, Amazon, Netflix, and Google) and now BAT (Baidu, Alibaba and Tencent). These players have created a progressively centralized internet that has limited competition and stifled the growth of startups, which are more vulnerable to these tech giants. A social-network founder I spoke with (who asked to remain nameless) described one of the large platforms continuously copying newly released features from his site, openly, because “they could.” He also witnessed a stall in user engagement and eventual churn. Unable to compete effectively without the necessary resources, he eventually relented, changing his business model and withdrawing to the cryptocurrency community to start anew.

Consider this: these eight players – Facebook, Apple, Microsoft, Amazon, Google, Tencent, Baidu, and Alibaba – are larger than the “market cap of every listed company in the Eurozone, in Emerging Markets and in Japan.” G-MAFIA (excluding IBM) posted combined average returns of 45% in 2018, compared with a 19% return for the S&P 500. Now add the high degree of consolidation in the tech industry: together, FAANG has acquired 398 companies since 2007. These acquisitions have heightened interest from regulators and economists in antitrust regulation. Add to this list one of the largest tech acquisitions in history, IBM’s purchase of Red Hat at a reported $34 billion.

Big tech valuations continue to rise despite the sins illuminated by their technologies. There is this dichotomy that pits what’s good for consumers against what’s good for shareholders. We’ve derived some great experiences from these platforms, but we’ve also seen examples of invisible harms. However unintended, they surface as a result of the business mandate to prioritize user growth and engagement. These performance indicators are what drive employee performance and company objectives. When we think about the impact of big tech, their cloud environments and web hosting servers ensure our emails, our social presence, and our websites are available to everyone on the web. In essence, they control how the internet is run.

Amy Webb, author of “The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity,” refers not only to G-MAFIA but also to BAT (the trio that has led the charge in China’s highly controversial Social Credit system, designed to create a trust value among Chinese citizens). She writes:

We stop assuming that the G-MAFIA (Google, Apple, Facebook, IBM, and Amazon) can serve its DC and Wall Street masters equally and that the free markets and our entrepreneurial spirit will produce the best possible outcomes for AI and humanity

These Nine will shape the future of the internet, no doubt. Webb envisions several scenarios where China’s encroaching influence will enable an AGI to control the world far more pervasively than the Social Credit system, and where “democracy will end” in the United States. This is not implausible, as we are already seeing signs of BAT’s increased funding across the gaming, social media and fintech sectors, outpacing the US in investment.

Webb also foresees a future of stifling individual privacy where our personal information is locked in the operating systems of these tech giants, now functioning oligopolies, fueling a “digital caste system,” mimicking a familiar authoritarian system in China.

This future that Webb forecasts is conceivable. Today, beyond Cambridge Analytica and governments’ alleged use of Facebook to manipulate voters and seed chaos, the damages, however divergent, are more pervasive and more connected to one another than we realize. We have seen Amazon’s facial recognition technology used in law enforcement, where it has been deemed ineffective and fraught with racial bias.

In the same vein, Buzzfeed reported the use of facial recognition in retail systems without regard for user consent. We believed in Facebook’s initiative to safeguard our security through two-factor authentication, while they used our mobile numbers to target our behavior and weaken our privacy in the process. Both Facebook and Amazon have been known to experiment with our data to manipulate our emotions. When TikTok was fined $5.7 million for illegally collecting children’s data, it was only following the lead of its predecessors.

The biggest data breaches of all time have involved some of the largest tech companies like FB, Yahoo! and Uber as well as established corporations like Marriott and Equifax. The downstream effects are yet to be realized as this data is bought and sold on the dark web to the highest bidders. When 23andMe created the Personal Genome Service as an offer to connect people to their roots, it was, instead, exposed as “front for a massive information-gathering operation against an unwitting public.”

This epidemic continues. What is emerging are the hidden intentions behind the algorithms and technology that make it more difficult to trust our peers, our institutions and our government. While employees were up in arms over Google’s “Dragonfly” censored search engine for China and its Project Maven drone-surveillance program with the U.S. Department of Defense, there exist very few mechanisms to stop such initiatives from taking flight without proper oversight. The tech community argues they are different from Big Pharma or banking, and that regulating them would strangle the internet.

Technology precedes regulation. This new world has created scenarios that are unaddressable under current laws. The GDPR presents a prevailing legal check, although some argue that aspects of it may stifle innovation. However, it’s a start. In the meantime, we need to make progress so that systems and governance are in sync, and tech giants are held in check. This is not an easy task.

Who is responsible for the consequences of AI decisions? What mechanisms should be in place to ensure that the industry does not act in ways that go against the public interest? How can practitioners determine whether a system is appropriate for the task and whether it remains appropriate over time? These were the very questions we attempted to answer at the UK/Canada Symposium on Ethics and Artificial Intelligence. There are no clear answers today.

Back to Basics: Can we re-decentralize an increasingly centralized internet?

Here’s a thought! How do we move our increasingly digital world to a place where we all feel safe; where we control our data; where our needs and desires are met without dependence on any one or two institutions to give us that value? The decentralized web is a mindset and a belief in an alternative structure that can address some of the afflictions that have arisen from data pollution. This fringe notion is slowly making its way back to the mainstream:

A Web designed to resist attempts to centralize its architecture, services, or protocols [so] that no individual, state, or corporation can substantially control its use.

Is it possible to reverse the deterioration we are experiencing today? I spoke with individuals who are actively working within the values of the decentralized web and building toward this vision. Andrew Hill and Carson Farmer developed Textile.io, a digital wallet for photos that are entirely controlled and owned by the user. Textile.io didn’t start out as a decentralized project. As Andrew recalls:

We started this project asking: what was the future of personal data going to look like? We didn’t like the answer at all. It seemed like the ubiquity of data, with the speed of computing power and the increasing complexity of algorithms, would lead us to a state that wouldn’t be good for us: easily manipulated, easily tracked and our personal lives easily invaded by third parties (government, individuals and companies).

Carson Farmer noted that Gmail is fundamentally a better user experience because individuals didn’t need to run their own protocols or set up their own servers. This “natural” progression to centralized technologies has served the Big Nine well.

Since then, it’s been this runaway because of the capitalist value behind data. They are building business models behind it and it will not go away overnight. By putting our blind trust into a handful of corporations who collect our data, we’ve created a run-away effect (some folks call it ‘data network effects’) where those companies now create value from our data, that is orders of magnitude greater than any new entrant into the market is capable of. This means that the ‘greatest’ innovation around our digital data is coming from only a handful of large companies.

However, people, en masse, don’t understand this imminent threat. Few really grasp the implications of cybersecurity breaches, or the impact on individual welfare and safety of the data they willingly provide these networks. How much of the mainstream needs to care for decentralization to achieve the scale it requires? Hill argues that few will abandon technologies unless their values are overridden by risk. He explained that our “signaled intentions actually differ from our intended behaviors.” For example, many would support legislation to reduce speed limits in certain areas to minimize deaths from auto accidents. However, engineering this limit into self-driving cars so they are unable to go faster would be far more objectionable, because it impedes us.

Adoption of a decentralized web cannot play by the old rules. New experiences and interactions outside of current norms need to appeal to individual values, enabling trust and ease of adoption. Pulling users away from convention is not an easy task. However, emerging organizations are starting to build bridges into the old technology in an effort to re-decentralize. Matrix.org has created an open standard for decentralized communications. The Dat Project, funded largely by donations, provides a peer-to-peer file-sharing protocol to create a more human-centered internet, without the risk of data being sold. Textile.io’s version of Instagram lets users add a photo to their mobile application; the photo lives on the phone, with a privately encrypted copy stored on an IPFS (“a peer-to-peer protocol for sharing hypermedia in a distributed file system”) node off the phone. No one can see the unencrypted photo unless the owner shares the private keys to it. Textile has no view into the data, nor any intention of processing or keeping it. Handshake.org is a “permissionless and decentralized naming protocol to replace the DNS root file and servers with a public commons”, uncensorable and free of any gatekeeper. The Internet Archive, started by Brewster Kahle, is a non-profit library that has cataloged over 400 billion web pages in the last 22 years, also digitizing all things analog (books, music, movies), in an effort to preserve web history and knowledge with free access to anyone.
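The Textile.io model described above follows a familiar pattern: encrypt on the device, store only ciphertext on the network, and treat "sharing" as sharing the key. A toy sketch of that pattern (not Textile's actual implementation — real systems use an authenticated cipher such as AES-GCM, not this illustrative XOR construction):

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    # Derive a pseudo-random byte stream from the key (toy construction
    # for illustration only; production code would use AES-GCM or similar).
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(photo: bytes, key: bytes) -> bytes:
    # Only this ciphertext ever leaves the phone for the storage node.
    return bytes(a ^ b for a, b in zip(photo, keystream(key, len(photo))))

decrypt = encrypt  # an XOR stream cipher is its own inverse

# The owner generates and keeps the key; "sharing" the photo = sharing the key.
key = secrets.token_bytes(32)
photo = b"\x89PNG...raw image bytes..."
stored = encrypt(photo, key)           # what the off-phone IPFS node holds
assert stored != photo                 # the node sees only ciphertext
assert decrypt(stored, key) == photo   # only a key holder recovers the image
```

The design point is that the storage network never needs to be trusted: it holds opaque bytes, and access control lives entirely in key distribution.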

Wendy Hanamura, Director of the Internet Archive, is also the founder of DWeb, a summit started in 2016 that brings together builders and non-builders across the four levers of change — 1) laws, 2) markets, 3) norms and values, and 4) technology — to advocate for a better web. The intention was a moonshot for the internet: to create “A web that’s locked open for good.” Why now? Wendy declared,

In the last few years we have woken up to see that the web is failing us. We turn to our screens for information and are getting, instead, deception in fake news, unreliable information, missing data. A lot of us in the sector feel we could do better. Technology is one path to doing better.

The prevailing vision of the DWeb:

A goal in creating a Decentralized Web is to reduce or eliminate such centralized points of control. That way, if any player drops out, the system still works. Such a system could better help protect user privacy, ensure reliable access, and even make it possible for users to buy and sell directly, without having to go through websites that now serve as middlemen, and collect user data in the process.

While it’s still early days, for at least a decade many players have chosen to become part of this movement to fix the issues that increasing centralization has created. From Diaspora to BitTorrent, a growing list of technologies continues to develop alternatives for the DWeb: storage, social networks, communication and collaboration apps, databases, cryptocurrencies, and more. Carson sees the DWeb evolving and feels the time is ripe for this opportunity:

Decentralization gives us a new way forward: decentralized data storage, encryption based privacy, and P2P networks give us the tools to imagine a world where individuals own and control their personal data. In that future, all technologies can build and contribute to the same data network effect. That is exciting because it means we can create a world with explosive innovation and value generation from our data, as opposed to one limited by the production capacity and imagination of those few companies…

Can the decentralized web fix this? In a world where trust is fleeting, this may be a significant pathway forward, but it’s still early days. The DWeb is reawakening. Its emerging players show tremendous promise; however, the experiences will need to get better. Many things must work in tandem. The public needs to be better informed of the impact on their individual rights and welfare. Business needs to change its mindset. I was reminded by Dr. George Tomko, Expert-in-Residence at the University of Toronto, that if business can become more human and more compassionate

…and have the ability to feel a person’s pain or discomfort and to care enough by collaborating with others in alleviating her pain or discomfort… what emerges is a society of greater empathy, and a culture that yields more success

Regulation also has to be in lockstep with technology, but it must be informed and well thought out to encourage competition and minimize costs to the consumer. More importantly, we must encourage more solutions that bring data control back to users, giving them the experiences they want out of the web without fear of repercussions. This was the original promise of the internet.

This originally appeared on Forbes.


Healthcare

Canadian startup Deep Genomics uses AI to speed up drug discovery


One of the biggest challenges pharmaceutical companies face is with the time taken to discover new drugs, develop them and get them to market. This lengthy process is punctuated with false starts. Startup Deep Genomics uses AI to accelerate the process.

Canadian startup Deep Genomics has been using artificial intelligence as a mechanism to speed up the drug discovery process, combining digital simulation technology with biological science and automation. The company has built a platform which uses machine learning to delve into the molecular basis of genetic diseases. The platform can analyze potential candidate drugs and identify those which appear most promising for further development by scientists.

The drug development process depends on many factors, such as combining molecules (accounting for interactions between hundreds of biological entities) and assessing biomedical data. The data review required at these stages is highly complex. For these reasons, many researchers are turning to algorithms to help extract data for analysis.

According to MaRS, Deep Genomics is addressing the time-consuming initial stages of drug discovery. The artificial intelligence system the company has designed is able to process 69 billion molecules, comparing each one against around one million cellular processes. This type of analysis would have taken a conventional computer (or a team of humans) many years to complete.

Within a few months, Deep Genomics’ AI narrowed the billions of combinations down to a shortlist of 1,000 potential drugs. This process is not only faster, it also reduces the number of experiments that need to be run, saving on laboratory tests and ensuring that only those drugs with a high chance of success progress to the clinical trial stage.
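Structurally, this kind of virtual screening is a ranking problem: score every candidate with a predictive model, then keep only the top scorers for lab work. A minimal sketch of that shortlisting step, with a stand-in scoring function (Deep Genomics' actual predictors of molecular effects on cellular processes are proprietary):

```python
import heapq
import random

def predicted_activity(molecule_id: int) -> float:
    # Stand-in for a learned model's score of how promising a molecule is;
    # seeded per-molecule so scores are deterministic for this sketch.
    return random.Random(molecule_id).random()

def shortlist(candidate_ids, k: int = 1000):
    # Stream through candidates, keeping only the k highest-scoring ones,
    # so memory stays O(k) even when the candidate pool is in the billions.
    return heapq.nlargest(k, candidate_ids, key=predicted_activity)

# A small stand-in pool; the real system screens ~69 billion molecules.
top = shortlist(range(100_000), k=1000)
assert len(top) == 1000
```

The practical payoff is the funnel shape: cheap computational scoring prunes the space so that only the shortlist ever incurs the cost of wet-lab experiments.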

This type of system goes some way to addressing the lengthy typical time to market, which stands at around 14 years for a candidate drug; as well as reducing the costs for drug development, which run into the billions of dollars per drug.


Healthcare

Health service partners with Alexa to provide medical support


The U.K. National Health Service (NHS) is to partner with Amazon’s Alexa in order to provide health information. This is being piloted as an alternative to medical advice helplines and to reduce the number of medical appointments.

While the U.K. NHS is much admired around the world as a free-at-the-point-of-use healthcare system, health officials are always keen to find ways to reduce the strain on the system, especially around medical visits, where booking appointments and waiting for sessions with doctors can be lengthy. The average wait for a non-emergency appointment with a general medical practitioner is around two weeks.

Although a non-emergency medical helpline is active (accessed by dialling 111), plus an online system, health officials are keen to explore other ways by which the U.K. population can access medical services. For this reason, NHS England is partnering with Amazon.

The use of Alexa voice technology not only offers an alternative service for digitally savvy patients, it also provides a potentially easier route to health information for elderly and visually impaired citizens, as well as those who cannot access the Internet through a keyboard. This fits in with NHSX, a new initiative from the U.K. Government supporting the NHS Long Term Plan’s aim of making more NHS services available digitally.

As PharmaPhorum reports, Alexa can now answer questions such as “Alexa, how do I treat a migraine?” and “Alexa, what are the symptoms of flu?”

Outside of the U.K., Amazon is working with several healthcare providers, including digital health coaching companies, in order to launch six new Alexa healthcare ‘skills’. According to Rachel Jiang, head of Alexa Health & Wellness: “Every day developers are inventing with voice to build helpful and convenient experiences for their customers. These new skills are designed to help customers manage a variety of healthcare needs at home simply using voice – whether it’s booking a medical appointment, accessing hospital post-discharge instructions, checking on the status of a prescription delivery and more.”
