
Technology

Artificial intelligence needs to reset


I had the chance to interview my colleague at ArCompany, Karen Bennet, a seasoned engineering executive in platform technology, open- and closed-source systems, and artificial intelligence. A former engineering lead at Yahoo! and part of the original team that brought Red Hat to success, Karen has evolved with the technological revolution, applying AI to expert systems in her early IBM days and now bearing witness to the rapid experimentation in machine learning and deep learning. Our discussions about the current state of AI culminated in this article.

It’s difficult to navigate AI amidst all the hype. The promises of AI have, for the most part, not come to fruition. AI is still emerging and has not yet become the pervasive force we have been promised. Consider the compelling stats that feed the excitement around AI:

  • 14X increase in the number of active AI startups since 2000
  • VC investment into AI startups has increased 6X since 2000
  • The share of jobs requiring AI skills has grown 4.5X since 2013

As of 2017, Statista put out these findings:

As of last year, only 5% of businesses worldwide have incorporated AI extensively into their processes and offerings, 32% have not yet adopted, and 22% do not have plans to.

Statista: Adoption level of artificial intelligence (AI) in business organizations worldwide, as of 2017

Filip Piekniewski confirmed in his recent post on VentureBeat, “The AI winter is well on its way”:

We are now in the middle of 2018 and things have changed. Not on the surface yet — the NIPS conference is still oversold, corporate PR still has AI all over its press releases, Elon Musk still keeps promising self-driving cars, and Google keeps pushing Andrew Ng’s line that AI is bigger than electricity. But this narrative is beginning to crack.

We touted the claims of the autonomous car. Earlier this spring, the death of a pedestrian struck by a self-driving vehicle raised alarms that went beyond the technology and called into question the ethics, or lack thereof, behind the decisions of an automated system. The trolley problem is not a simple binary choice between one life and five; it evolves into a debate of conscience, emotion and perception that complicates the path by which a machine can make a reasonable decision. The conclusion from this article states:

But the dream of a fully autonomous car may be further than we realize. There’s a growing concern among AI experts that it may be years, if not decades before self-driving systems can reliably avoid accidents.

To use history as a predictor, both the cloud and dot-com industries took about 5 years before they started impacting people in a significant way, and almost 10 years before they influenced major shifts in the market. We envision a similar timeline for artificial intelligence. As Karen explains,

To enable adoption by everyone, a product needs to be in place, one that is scalable and one that can be used by everyone, not just data scientists. This product will need to take into account the data lifecycle of capturing data, preparing it, training models and predicting. With data being stored in the cloud, data pipelines can continuously extract and prepare it to train the models that will make the predictions. The models need to continuously improve from new training data, which, in turn, will keep the models relevant and transparent. That is the objective and the promise.
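To make that lifecycle concrete, here is a minimal sketch in Python using scikit-learn. The file name and column names are hypothetical placeholders, not a production pipeline:

```python
# Minimal sketch of the lifecycle Karen describes:
# capture -> prepare -> train -> predict -> retrain on new data.
# The CSV path and column names are invented placeholders.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

# Capture: load raw data (in production this would stream from cloud storage).
df = pd.read_csv("customer_events.csv")

# Prepare: separate features from the label and hold out an evaluation set.
X, y = df.drop(columns=["churned"]), df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Train: a pipeline keeps preparation and modelling steps together.
model = Pipeline([("scale", StandardScaler()),
                  ("clf", LogisticRegression())])
model.fit(X_train, y_train)

# Predict, then monitor: retraining on fresh labelled data is what keeps
# the model relevant over time.
print("held-out accuracy:", model.score(X_test, y_test))
```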

Building AI Proof of Concepts with No Significant Use Cases

Both Karen and I have come from technology and AI start-ups. What we’ve witnessed, and what we’ve come to realize through discussions with peers in the AI community, is widespread experimentation across a multitude of business issues, experiments that tend to stay in the lab.

This recent article substantiates how widespread these AI pilots have become:

Vendors of AI technology are often incentivized to make their technology sound more capable than it is – but hint at more real-world traction than they actually have… Most AI applications in the enterprise are little more than ‘pilots.’ Primarily, vendor companies that sell marketing solutions, healthcare solutions and finance solutions in artificial intelligence are simply test-driving the technology. In any given industry, we find that of the hundreds of vendor companies selling AI software and technology, only about one in three will actually have the requisite skills to do artificial intelligence in the first place.

VCs are realizing they may not see a return on their investments for some time. However, ubiquitous experimentation with very few models seeing daylight is just one of the reasons why AI is not ready for prime time.

Can Algorithms be Accountable?

We’ve heard of the AI “black box,” a current approach that offers no visibility into how decisions are made. This practice flies in the face of banks and large institutions whose compliance standards and policies mandate accountability. With systems operating as black boxes, an inherent trust may be placed in algorithms so long as their creation has been reviewed and has met some standard set by critical stakeholders. That notion has been quickly disputed, given the overwhelming evidence of faulty algorithms in production and the unexpected and harmful outcomes that result from them. Many of our simplest systems operate as black boxes beyond the scope of any meaningful scrutiny, because of intentional corporate secrecy and the lack of adequate education in how to critically examine the inputs, the outcomes and, most importantly, why those results occurred. Karen concurs,

The AI industry today is at a very early stage of being enterprise-ready. AI is very useful and ready for discovery and aiding in parsing through significant amounts of data, however, it still requires human intervention as a guide to evaluate and act on the data and their outcomes.

Karen clarifies that machine learning techniques today enable data to be labelled to identify insights. However, if some of the data is erroneously labelled, if there is not enough data representation, or if there is problematic data signifying bias, bad decisions are likely to result. She also attests that current processes continue to be refined:

Currently, AI is all about decision support, providing insights in a form from which business can draw conclusions. In the next phase of AI, which automates actions from the data, there are additional issues that need to be addressed, like bias, explainability, privacy, diversity, ethics, and continuous model learning.

Karen points to image captioning as an example of an AI model making mistakes: the captions expose only the knowledge learned from training on images labelled with the objects they contain. This suggests that a common-sense world model of objects and people is required for an AI product to truly understand what it sees. A model exposed to a limited number of labelled objects, with limited variety in the training set, will have a correspondingly limited world model. Research into determining how a model treats its inputs and reaches its conclusions, in human-understandable terms, is needed for the enterprise. Amazon’s release of Rekognition, its facial recognition technology, is an example of a product in production and licensed for use while noticeable gaps exist in its effectiveness. According to a study released by the ACLU:

…the technology managed to confuse photos of 28 members of Congress with publicly available mug shots. Given that Amazon actively markets Rekognition to law enforcement agencies across the US, that’s simply not good enough.

Joy Buolamwini, an MIT graduate and founder of the Algorithmic Justice League, called in a recent interview for a moratorium on this technology, stating that it was ineffective and needed more oversight, and she has appealed for government standards for these types of systems before they are publicly released.
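Research into explainability is active, and simple probes already exist. The sketch below uses scikit-learn’s permutation importance on an illustrative model to ask, in human-understandable terms, which inputs the model actually relies on; the dataset and model are stand-ins, not any vendor’s system:

```python
# A simple, widely used probe of "why did the model decide that?":
# permutation importance measures how much each input feature drives
# the model's behaviour. The dataset and model here are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer(as_frame=True)
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Shuffle each feature in turn and watch how accuracy degrades:
# features whose shuffling hurts most are the ones the model relies on.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```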

AI’s Major Impediments: Mindset, Culture and Legacy

Having to transform legacy systems is the top barrier to implementing AI in many organizations today. Mindset and culture are elements of those legacy systems: they offer a systemic view into the established processes, values and business rules that dictate not only how organizations operate, but also why these ingrained elements create significant hurdles for change, especially when things are currently humming along nicely. There is, therefore, no real incentive to dismantle existing infrastructure at the moment.

AI is a component of business transformation, and while that topic has gained as much buzz as the AI hype, the investment and commitment required to make significant changes are met with hesitation. We’ve heard from companies willing to experiment on specific use cases but unprepared for the requirements to train staff, re-engineer processes, and revamp governance and corporate policies. For larger organizations that are compelled to make these significant investments, the question shouldn’t be one of return on investment but, rather, of sustainable competitive advantage.

The Problems with Data Integrity

AI today needs massive amounts of data to produce meaningful results, and it cannot leverage experience from one application in another. While Karen notes there is work in progress to overcome these limitations, this transfer of learning is needed before models can be applied in a scalable way. There are scenarios, however, where AI can be used effectively today, such as revealing insights in images, voice and video, and translating between languages.
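Transfer learning is the most visible of those efforts. A minimal sketch, assuming a PyTorch/torchvision setup: reuse the features a network learned on one task (ImageNet classification) and retrain only a small new head for another; the class count is a placeholder:

```python
# Sketch of transfer learning: reuse features a model learned on one task
# to bootstrap another task with far less data. num_classes is invented.
import torch.nn as nn
from torchvision import models

num_classes = 5  # placeholder label count for the new task
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers so their learned features are reused as-is.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer; only this small head needs task-specific data.
model.fc = nn.Linear(model.fc.in_features, num_classes)
```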

Companies are learning that the focus should be on:

  • diversity in the data, which includes proper representation across populations
  • ensuring diverse experiences, perspectives and thinking go into the creation of algorithms
  • prioritizing quality of the data over quantity

These points become especially important as bias creeps in and trust and confidence in the data degrade. For example, Turkish is a gender-neutral language, but the AI model in Google Translate incorrectly predicts the gender when translating to English. Similarly, cancer-spotting AI image recognition has been trained only on fair-skinned people. In the computer vision example above, Joy Buolamwini tested these AI technologies and found they worked more effectively on male than on female faces, and on lighter than on darker skin. The “error rates were as low as 1% on males and as high as 35% on dark females.” These issues occur because of the failure to use diverse training data. Karen concedes,

The concept of AI is simple, but the algorithms get smarter by ingesting more and more real-world data; however, being able to explain the decisions becomes extremely difficult. The data may be continuously changing, and AI models require filters to prevent incorrect labelling, such as an image of a black man being labelled as a gorilla or a panda being labelled as a gibbon. Enterprises relying on faulty data to make decisions will arrive at ill-informed results.
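Audits like Buolamwini’s can be reproduced in miniature. Here is a sketch of the basic check, with invented stand-in data: compute a model’s error rate separately for each demographic subgroup and flag large gaps:

```python
# Minimal subgroup audit: measure a classifier's error rate per group.
# y_true, y_pred and the group labels are invented stand-ins.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 0, 0, 1, 1])
group  = np.array(["light", "light", "dark", "dark",
                   "light", "dark", "dark", "light"])

for g in np.unique(group):
    mask = group == g
    error_rate = np.mean(y_true[mask] != y_pred[mask])
    # A large gap between subgroups signals under-representation in the
    # training data or a model that fails certain populations.
    print(f"{g}: error rate {error_rate:.0%}")
```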

Fortunately, given AI’s nascency, very few organizations are making significant business decisions from the data today. From what we’ve witnessed, most solutions mainly produce product recommendations and personalized marketing communication. Any wrong conclusions that result have lesser societal impact… at least for now.

Using data to make business decisions is not new, but what has changed is the exponential increase in the volume and mix of structured and unstructured data being used. AI enables us to use data continuously at its source and obtain insight much faster. This is an opportunity for businesses that have the capacity and structure to handle data volume from diverse sources. For other organizations, however, the masses of data can represent a risk, because the divergent sources and formats make the information harder to transform: emails, system logs, web pages, customer transcripts, documents, slides, informal chats, social networks, and exploding rich media like images and video. Data transformation continues to be a stumbling block towards developing clean data sets and, hence, effective models.
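To illustrate the transformation problem, here is a toy sketch that coerces records from two divergent sources into one shared schema; the sources and field names are invented:

```python
# Toy illustration of data transformation: records arrive in divergent
# shapes (here, email vs. chat) and must be coerced into one clean schema
# before any model can train on them. All field names are invented.
from datetime import datetime

raw_records = [
    {"source": "email", "from": "a@example.com", "sent": "2018-07-01T10:00:00"},
    {"source": "chat",  "user": "a@example.com", "ts": 1530439200},
]

def normalize(rec):
    # Map each source's idiosyncratic fields onto a shared schema.
    if rec["source"] == "email":
        return {"user": rec["from"], "time": datetime.fromisoformat(rec["sent"])}
    if rec["source"] == "chat":
        return {"user": rec["user"], "time": datetime.fromtimestamp(rec["ts"])}
    raise ValueError(f"unknown source: {rec['source']}")

clean = [normalize(r) for r in raw_records]
print(clean)
```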

Bias is More Prevalent than We Realize

Bias exists in many business models built to minimize risk and optimize targeting opportunities, and while these models may produce profitable business results, they have been known to cause unintended consequences that harm individuals and deepen economic disparities. Insurance companies may use location information or credit-score data to issue higher premiums to poorer customers. Banks may approve prospects with lower credit scores who are already debt-ridden and may be unable to afford the higher lending rates.

There is heightened caution surrounding bias because the introduction of AI will not only perpetuate existing biases; the results from these learning models may generalize to the point of deepening the economic and societal divide. Bias presents itself in current algorithms that determine the likelihood of recidivism (the likelihood of re-offending), like COMPAS. Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) was created by a company known as Northpointe to assess risk and predict criminality for defendants in pre-trial hearings. The types of questions used in the initial COMPAS research revealed enough human bias that the system perpetuated recommendations that treated black defendants who would never go on to re-offend more harshly, while white defendants who would go on to re-offend were treated more leniently at sentencing. With no public standard available, Northpointe was able to create its own definition of fairness and develop an algorithm without third-party assessment… until recently. This article confirmed: “A Popular Algorithm Is No Better at Predicting Crimes Than Random People.”

If this software is only as accurate as untrained people responding to an online survey, I think the courts should consider that when trying to decide how much weight to put on them in making decisions.

Karen stipulated,

While we try to fix existing systems to minimize this bias, it is critical that models train on diverse sets of data to prevent future harms.

Given the potential for faulty models to pervade business and society, businesses do not yet have governance mechanisms to police the unfair or immoral decisions that inadvertently impact the end consumer. This is discussed further under ethics below.

The Increasing Demand for Privacy

Karen and I both came from Yahoo!, where we worked with strong research and data teams that were able to contextualize user behaviour across the platform. We continuously studied user behaviour and understood users’ propensities across our multitude of properties, from Music to the Homepage to Lifestyle, News, etc. At that time, there were no strict standards or regulations for data use. Privacy was relegated to users’ passive agreement to the platform’s terms and conditions, much as it is today.

The recent Cambridge Analytica/Facebook scandal has brought personal data privacy front and centre. Frequent data breaches at major credit institutions like Equifax and, most recently, at Facebook and Google+ continue to compound the issue. Questions of ownership, consent and erroneous contextualization make this a ripe topic as AI continues to iron out its kinks. The European General Data Protection Regulation (GDPR), which came into effect on May 25, 2018, is changing the game for organizations, particularly those that collect, store and analyze personal user information, and it changes the rules under which business has operated for many years. The unbridled use of personal information has come to a head, as businesses now realize there will be significant limitations on data use and, more importantly, ownership.

We are seeing the early effects of this in location-based advertising. This $75 billion industry, slated to grow at a 21% CAGR through 2021, continues to be impeded by the oligopoly of Facebook and Google, which secure the bulk of revenues. And now the GDPR raises the stakes, making these ad-tech companies more accountable:

Twitter: @hessiejones

The stakes are high enough that [advertisers] have to have a very high degree of confidence that what you’re being told is actually in compliance. It seems like there is enough general confusion about what will ultimately constitute a violation that people are taking a broad approach to this until you can get precise about what compliance looks like.

While regulation will eventually constrain revenues, for the moment the mobile and ad-platform industries are also facing increasing scrutiny from the very subjects they have been monetizing for years: consumers. This, coupled with the examination of established practices, will force the industry to shift how it collects, aggregates, analyzes and shares user information.

Operationalizing privacy will take time, significant investment (a topic that needs to be afforded more attention), and a change in mindset that will impact organizational policy, process, and culture.

The Inevitable Coupling of AI & Ethics

The prevailing promise of AI is societal benefit: streamlining processes, increasing convenience, improving products and services, and detecting potential harms through automation. Realizing that promise means routinely measuring inputs and outputs against outcomes across manufacturing processes, services, assessment solutions, production and product quality.

As discussions and news about AI persist, the coupling of “AI” with “ethics” reveals increasingly grave concerns about where AI technology can inflict societal damage that tests human conscience and values.

CB Insights: Tech Cos Confront Ethics of AI

Beyond individual privacy concerns, today we are seeing examples of innovation that border on the unconscionable. As stated previously, Rekognition and Face++ are being used in law enforcement and citizen surveillance even while the technology is deemed faulty. Employees walked out in protest of Google’s decision to provide artificial intelligence to the Defense Department for the analysis of drone footage, with the goal of creating a sophisticated system to surveil cities, in a project known as Project Maven. The same tech giant is also building Project Dragonfly for China, a censored search engine that can also map individual searches to identities.

Decision-makers and regulators will need to instill new processes and policies to properly assess how AI technologies are being used, for what purpose, and whether there may be unintended fallout along the way. Karen pointed to new questions about the use of data in AI algorithms that will need to be considered:

How do we detect sensitive data fields and anonymize them while preserving the important features of a dataset? Can we train on synthetic data as an alternative in the short term? The questions we need to ask ourselves when creating the algorithm: What fields do we require to deliver the outcomes we want? What parameters should we create to define “fairness” in the models, meaning, does this treat two individuals differently? And if so, why? How do we continuously monitor for this within our systems?
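For Karen’s first question, one common (if partial) approach is pseudonymization: replace sensitive fields with a keyed hash so records remain linkable for training without exposing identities. A minimal sketch, with invented field names and deliberately simplified key handling:

```python
# Pseudonymize sensitive fields with a keyed hash. Records stay linkable
# (the same person always maps to the same token), but no raw PII is kept.
# The field names and key handling here are invented and simplified.
import hashlib
import hmac

SECRET_KEY = b"rotate-me-and-store-in-a-vault"  # placeholder secret
SENSITIVE_FIELDS = {"name", "email", "phone"}

def pseudonymize(record):
    clean = {}
    for field, value in record.items():
        if field in SENSITIVE_FIELDS:
            # Keyed hashing preserves joins and frequency features
            # while removing the identity itself.
            token = hmac.new(SECRET_KEY, str(value).encode(),
                             hashlib.sha256).hexdigest()[:16]
            clean[field] = token
        else:
            clean[field] = value
    return clean

print(pseudonymize({"name": "Jane Doe", "email": "jane@example.com",
                    "age": 34, "postal_prefix": "M5V"}))
```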

An AI Winter is a Serendipitous Opportunity to Get AI Ready

AI has come a long way but still needs more time to mature. In a world of increasing automation and deliberate progress towards greater cognitive computing capabilities, the impending AI winter affords business the necessary time to determine how AI fits into the organization and which problems it should solve. The potential casualties of AI need to be addressed in policy and in governance, along with AI’s impact on individuals and society.

Its impact will be far greater in this next industrial revolution, as its ubiquity becomes more nuanced in our lives. Leading voices of AI, from Geoffrey Hinton to Fei-Fei Li and Andrew Ng, have called for an AI reset because deep learning has not yet proven to scale. The promise of AI is not waning; rather, the expectation of its real arrival is being pushed further out, perhaps 5 to 10 years. We have time to work through these issues in deep learning and other AI methods, and in the processes needed to effectively extract value from data. This culmination of business readiness, regulation, education and research is necessary to bring both business and consumers up to speed, and to ensure a regulatory system is in place to properly constrain the technology, one that leaves humanity at the helm a little while longer.

About Karen Bennet

Karen Bennet, a Principal Consultant at ArCompany, is an experienced senior engineering leader with more than 25 years in the software development business, in both open- and closed-source solutions. More recently, Karen’s efforts have focused on artificial intelligence, enabling enterprises, particularly in the banking and automotive sectors, to experiment with AI/ML. She has held engineering leadership positions at Cerebri AI, IBM, Yahoo! and Trapeze, and was an early leader who helped grow Cygnus and Red Hat into sustainable businesses.

This post originally appeared on Forbes.

Technology

Society desperately needs an alternative web


I see a society that is crumbling. Rampant technology is simultaneously capsizing industries that were previously the bread and butter of economic growth. Working men and women have felt its effects as wages stagnate and employment opportunities dwindle amidst a progressively automated economy. Increasing wage inequality and financial vulnerability have given rise to populism, and the domino effects are spreading.

People are angry. They demand fairness and feel threatened by policies and outsiders that may endanger their livelihoods. This has widened the cultural and racial divide within and between nations. Technology has enabled this anger to spread, influence and manipulate at much greater speed than ever before, resulting in increasing polarization and a sweeping anxiety epidemic.

Globally, we are much more connected, and this to our detriment. We’ve witnessed both government and business leverage technology to spread disinformation for their own gains. While regulators struggle to keep pace with these harms, the tech giants continue, unabated, to wield their influence and power, establishing footprints that make both consumers and businesses increasingly dependent on their platforms and technology stacks. We cannot escape them, nor do we want to. Therein lies the concern…

This recent article, “The World is Choking on Data Pollution” offered a profound distillation of what we are witnessing today:

Progress has not been without a price. Like the factories of 200 years ago, digital advances have given rise to a pollution that is reducing the quality of our lives and the strength of our democracy… We are now face-to-face with a system embedded in every structure of our lives and institutions, shaping our society in ways that deeply impact our basic values.

Tim Berners-Lee’s Intent for the World Wide Web has Run Off-Course

Tim Berners-Lee once held this Pollyannaish view: what if we could develop a web that was free for everyone to use and that would fuel creativity, connection, knowledge and optimism across the globe? He believed the internet to be a basic human right,

…That means guaranteeing affordable access for all, ensuring internet packets are delivered without commercial or political discrimination, and protecting the privacy and freedom of web users regardless of where they live.

Between 1989 and 1991, Tim Berners-Lee led the development of the World Wide Web, unleashing HTML (HyperText Markup Language, used to create web pages), HTTP (HyperText Transfer Protocol), and URLs (Universal Resource Locators).

The now-ubiquitous WWW set in motion a movement that has scaled tremendously, reinventing the way we do business, access and consume information, and create connections, and perpetuating an unrelenting mindset of innovation and optimism.

What has also transpired is a web of unbridled opportunism and exploitation, uncertainty and disparity. We see increasing pockets of silos and echo chambers fueled by anxiety, misplaced trust and confirmation bias. As mainstream consumers bear witness to these intentions, we notice a growing marginalization that propels more people to unplug from these communities and applications to safeguard their mental health. However, the addiction technology has produced cannot be easily remedied. In the meantime, people continue to suffer.

What has been most distressing are the effects of cyberbullying on our children. In 2016, the National Crime Prevention Council reported that 43% of teens had been targets of cyberbullying, an increase of 11% from a decade prior. Some other numbing statistics:

  • “2017 Pediatric Academic Societies Meeting revealed the number of children admitted to hospitals for attempted suicide or expressing suicidal thoughts doubled between 2008 and 2015”
  • “Javelin Research finds that children who are bullied are 9 times more likely to be the victims of identity fraud as well.”
  • “Data from numerous studies also indicate that social media is now the favored medium for cyberbullies”

Big Tech: Too Big to Fail?

As the web evolved throughout the ’90s, we witnessed the emergence of hefty players like Google, Yahoo and Microsoft, and later Facebook and Amazon. As Chris Dixon asserted:

During the second era of the internet, from the mid 2000s to the present, for-profit tech companies — most notably Google, Apple, Facebook, and Amazon (GAFA) — built software and services that rapidly outpaced the capabilities of open protocols. The explosive growth of smartphones accelerated this trend as mobile apps became the majority of internet use. Eventually users migrated from open services to these more sophisticated, centralized services. Even when users still accessed open protocols like the web, they would typically do so mediated by GAFA software and services.

Today we apply a few apt acronyms to these giants: G-MAFIA (Google, Microsoft, Amazon, Facebook, IBM, Apple), FAANG (Facebook, Apple, Amazon, Netflix and Google) and now BAT (Baidu, Alibaba and Tencent). These players have created a progressively centralized internet that limits competition and stifles the growth of startups, which are especially vulnerable to the tech giants. A social network founder I spoke with (who asked to remain nameless) described one of the large platforms continuously copying newly released features from his site, and doing so openly because “they could.” He watched user engagement stall and eventually churn. Unable to compete effectively without the necessary resources, he eventually relented, changed his business model and withdrew to the cryptocurrency community to start anew.

Consider this: these eight players (Facebook, Apple, Microsoft, Amazon, Google, Tencent, Baidu and Alibaba) are larger than the “market cap of every listed company in the Eurozone, in Emerging Markets and in Japan.” The G-MAFIA companies (excluding IBM) posted average returns of 45% in 2018, compared with a 19% return for the S&P 500. Now add the high degree of consolidation in the tech industry: together, FAANG has acquired 398 companies since 2007. These acquisitions have heightened interest from regulators and economists in anti-trust regulation. Add to this list the largest software acquisition in history, IBM’s purchase of Red Hat at a reported $34 billion.

Big tech valuations continue to rise despite the sins illuminated by their technologies. There is this dichotomy that pits what’s good for consumers against what’s good for shareholders. We’ve derived some great experiences from these platforms, but we’ve also seen examples of invisible harms. However unintended, they surface as a result of the business mandate to prioritize user growth and engagement. These performance indicators are what drive employee performance and company objectives. When we think about the impact of big tech, their cloud environments and web hosting servers ensure our emails, our social presence, and our websites are available to everyone on the web. In essence, they control how the internet is run.

Amy Webb, author of “The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity,” refers not only to the G-MAFIA but also to BAT, the consortium that has led the charge on China’s highly controversial Social Credit system, which assigns a trust value to Chinese citizens. She writes:

We stop assuming that the G-MAFIA (Google, Apple, Facebook, IBM, and Amazon) can serve its DC and Wall Street masters equally and that the free markets and our entrepreneurial spirit will produce the best possible outcomes for AI and humanity.

These Nine will no doubt shape the future of the internet. Webb envisions several scenarios in which China’s encroaching influence enables an AGI to control the world far more pervasively than the Social Credit System, and in which “democracy will end” in the United States. This is not implausible, as we are already seeing signs of BAT’s increased funding across the gaming, social media and fintech sectors, outpacing the US in investment.

Webb also foresees a future of stifled individual privacy, where our personal information is locked into the operating systems of these tech giants, by then functioning as oligopolies, fueling a “digital caste system” that mimics the familiar authoritarian system in China.

The future Webb forecasts is conceivable. Today, beyond Cambridge Analytica and governments’ alleged use of Facebook to manipulate voters and seed chaos, the damages, however divergent, are more pervasive and more connected to one another than we realize. We have seen Amazon’s facial recognition technology used in law enforcement, where it has been deemed ineffective and fraught with racial bias.

In the same vein, Buzzfeed reported facial recognition being used in retail systems without regard for user consent. We believed in Facebook’s initiative to safeguard our security through two-factor authentication, while it used our mobile numbers to target our behavior and weaken our privacy in the process. Both Facebook and Amazon have been known to experiment with our data to manipulate our emotions. When TikTok was fined $5.7 million for illegally collecting children’s data, it was only following the lead of its predecessors.

The biggest data breaches of all time have involved some of the largest tech companies, like Facebook, Yahoo! and Uber, as well as established corporations like Marriott and Equifax. The downstream effects are yet to be realized, as this data is bought and sold on the dark web to the highest bidders. When 23andMe created the Personal Genome Service as an offer to connect people to their roots, it was instead exposed as a “front for a massive information-gathering operation against an unwitting public.”

This epidemic continues. What is emerging are the hidden intentions behind the algorithms and technology, which make it more difficult to trust our peers, our institutions and our government. While employees were up in arms over Google’s “Dragonfly” censored search engine for China and the Project Maven drone-surveillance program for the Defense Department, there exist very few mechanisms to stop these initiatives from taking flight without proper oversight. The tech community argues it is different from Big Pharma or banking: regulating it would strangle the internet.

Technology precedes regulation. This new world has created scenarios that are unaddressable under current laws. There is a prevailing legal threat unleashed through the GDPR; however, some argue aspects of it may indeed stifle innovation. Still, it’s a start. In the meantime, we need to progress so that systems and governance are in sync, and tech giants are held in check. This is not an easy task.

Who is responsible for the consequences of AI decisions? What mechanisms should be in place to ensure that the industry does not act in ways that go against the public interest? How can practitioners determine whether a system is appropriate for the task and whether it remains appropriate over time? These were the very questions we attempted to answer at the UK/Canada Symposium on Ethics and Artificial Intelligence. There are no clear answers today.

Back to Basics: Can we re-decentralize an increasingly centralized internet?

Here’s a thought! How do we move our increasingly digital world to a place where we all feel safe; where we control our data; where our needs and desires are met without dependence on any one or two institutions to deliver that value? The decentralized web is a mindset and a belief in an alternative structure that can address some of the afflictions arising from data pollution. This fringe notion is slowly making its way back to the mainstream:

A Web designed to resist attempts to centralize its architecture, services, or protocols [so] that no individual, state, or corporation can substantially control its use.

Is it possible to reverse the deterioration we are experiencing today? I spoke with individuals who are working actively within the values of the decentralized web and building towards this panacea. Andrew Hill and Carson Farmer developed Textile.io, a digital wallet for photos that are entirely controlled and owned by the user. Textile.io didn’t start out as a decentralized project. As Andrew recalls:

We started this project asking: what was the future of personal data going to look like? We didn’t like the answer at all. It seemed like the ubiquity of data, with the speed of computing power and the increasing complexity of algorithms, would lead us to a state that wouldn’t be good for us: easily manipulated, easily tracked, and personal lives easily invaded by third parties (governments, individuals and companies).

Carson Farmer noted that Gmail is fundamentally a better user experience because individuals don’t need to run their own protocols or set up their own servers. This “natural” progression to centralized technologies has served the Big Nine well.

Since then, it’s been this runaway effect because of the capitalist value behind data. They are building business models behind it and it will not go away overnight. By putting our blind trust into a handful of corporations who collect our data, we’ve created a run-away effect (some folks call it ‘data network effects’) where those companies now create value from our data that is orders of magnitude greater than any new entrant into the market is capable of. This means that the ‘greatest’ innovation around our digital data is coming from only a handful of large companies.

However, people en masse don’t understand this imminent threat. Few really understand the implications of cybersecurity breaches, or the impact on individual welfare and safety of the data they willingly provide these networks. How much of the mainstream needs to care for this to achieve the scalability it requires? Hill argues that few will abandon technologies unless their values are overridden by risk, explaining that our “signaled intentions actually differ from our intended behaviors.” For example, many would support legislation to reduce speed limits in certain areas to minimize deaths from auto accidents. However, engineering this feature into self-driving cars so they are unable to go faster would be far more objectionable, because it impedes us.

Adoption of a decentralized web cannot play by the old rules. New experiences and interactions outside current norms need to appeal to individual values, enabling trust and ease of adoption. Pulling users away from convention is not an easy task. However, emerging organizations are starting to build bridges into the old technology in an effort to re-decentralize. Matrix.org has created an open standard for decentralized communications. The Dat Project, funded largely by donations, provides a peer-to-peer file-sharing protocol to create a more human-centered internet, without the risk of data being sold. Textile.io’s version of Instagram lets users add a photo to their mobile application; it lives on the phone, with a privately encrypted copy existing on an IPFS (“a peer-to-peer protocol for sharing hypermedia in a distributed file system”) node off the phone. No one sees the encrypted photo unless you share the private keys to it. Textile has no view into the data, nor any intention of processing or keeping it. Handshake.org is a “permissionless and decentralized naming protocol to replace the DNS root file and servers with a public commons,” uncensorable and free of any gatekeeper. The Internet Archive, started by Brewster Kahle, is a non-profit library that has catalogued over 400 billion web pages in the last 22 years, digitizing all things analog (books, music, movies) in an attempt to preserve web history and knowledge with free access for anyone.
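The pattern Textile describes, encrypt on the device and store only ciphertext, can be sketched in a few lines with a standard library. This is illustrative only, not Textile’s actual code:

```python
# Encrypt-before-upload sketch: the photo is encrypted on the device,
# only the ciphertext leaves it, and nobody can view the image without
# the key you choose to share. Illustrative only; Textile's real
# implementation differs.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # stays on your phone
cipher = Fernet(key)

with open("photo.jpg", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

# `ciphertext` is what would be pinned to an IPFS node; it is opaque to
# the storage network. Sharing the photo means sharing `key`.
restored = cipher.decrypt(ciphertext)
```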

Wendy Hanamura, Director of the Internet Archive, is also the founder of DWeb, a summit started in 2016 that brings together builders and non-builders across the four levers of change, 1) laws, 2) markets, 3) norms and values, and 4) technology, to advocate for a better web. The intention was a moonshot for the internet: to create “a web that’s locked open for good.” Why now? Wendy declared,

In the last few years we have woken up to see that the web is failing us. We turn to our screens for information and get, instead, deception in fake news, unreliable information, missing data. A lot of us in the sector feel we could do better. Technology is one path to doing better.

The prevailing vision of the DWeb:

A goal in creating a Decentralized Web is to reduce or eliminate such centralized points of control. That way, if any player drops out, the system still works. Such a system could better help protect user privacy, ensure reliable access, and even make it possible for users to buy and sell directly, without having to go through websites that now serve as middlemen, and collect user data in the process.

While it’s still early days, for at least a decade many players have chosen to become part of this movement to fix the issues that increasing centralization has created. From Diaspora to BitTorrent, a growing list of technologies continues to develop alternatives for the DWeb: storage, social networks, communication and collaboration apps, databases, cryptocurrencies, etc. Carson sees the DWeb evolving and feels the time is ripe for this opportunity:

Decentralization gives us a new way forward: decentralized data storage, encryption based privacy, and P2P networks give us the tools to imagine a world where individuals own and control their personal data. In that future, all technologies can build and contribute to the same data network effect. That is exciting because it means we can create a world with explosive innovation and value generation from our data, as opposed to one limited by the production capacity and imagination of those few companies…

Can the decentralized web fix this? In a world where trust is fleeting, it may be a significant pathway forward, but it’s still early days. The DWeb is reawakening. Its emerging players show tremendous promise; however, the experiences will need to get better, and many things must work in tandem. The public needs to be better informed of the impact on their individual rights and welfare. Business needs to change its mindset. I was reminded by Dr. George Tomko, Expert in Residence at the University of Toronto, that if business can become more human and more compassionate,

…and have the ability to feel a person’s pain or discomfort and to care enough by collaborating with others in alleviating her pain or discomfort… what emerges is a society of greater empathy, and a culture that yields more success

Regulation also has to be in lock-step with technology, but it must be informed and well thought out, encouraging competition and minimizing costs to the consumer. More importantly, we must encourage more solutions that bring data control back to users, giving them the experiences they want out of the web without fear of repercussions. This was the original promise of the internet.

This originally appeared on Forbes.


Healthcare

Canadian startup Deep Genomics uses AI to speed up drug discovery


One of the biggest challenges pharmaceutical companies face is the time it takes to discover new drugs, develop them and get them to market. This lengthy process is punctuated with false starts. Startup Deep Genomics uses AI to accelerate the process.

Canadian startup Deep Genomics has been using artificial intelligence as a mechanism to speed up the drug discovery process, combining digital simulation technology with biological science and automation. The company has built a platform which uses machine learning to delve into the molecular basis of genetic diseases. The platform can analyze potential candidate drugs and identify those which appear most promising for further development by scientists.

The drug development process depends on many factors, such as those relating to combining molecules (noting the interactions between hundreds of biological entities) and assessing biomedical data. The data review required at these stages is highly complex. For these reasons, many researchers are turning to algorithms to help extract data for analysis.

According to MaRS, Deep Genomics is addressing the time-consuming initial stages of drug discovery. The artificial intelligence system the company has designed can process 69 billion molecules, comparing each one against around one million cellular processes. This type of analysis would have taken a conventional computer (or a team of humans) many years of computation.

Within a few months, Deep Genomics’ AI narrowed the billions of combinations down to a shortlist of 1,000 potential drugs. This process is not only faster; it reduces the number of experiments that need to be run, saving on laboratory tests and ensuring that only drugs with a high chance of success progress to the clinical trial stage.
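Deep Genomics has not published its pipeline, but the screen-and-shortlist pattern described above can be sketched roughly as follows; the scoring function here is a random stand-in for a trained model:

```python
# Toy sketch of the screen-and-shortlist pattern: score every candidate
# with a model, keep the top fraction, and send only that shortlist to
# wet-lab experiments. The scoring function is a random stand-in.
import heapq
import random

def predicted_efficacy(molecule_id):
    # Stand-in for a trained model scoring a molecule against the
    # cellular processes relevant to a target disease.
    return random.random()

candidates = range(1_000_000)   # billions, in the real system
SHORTLIST_SIZE = 1_000

shortlist = heapq.nlargest(SHORTLIST_SIZE, candidates, key=predicted_efficacy)
print(f"{len(shortlist)} candidates forwarded to laboratory testing")
```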

This type of system goes some way towards addressing the typically lengthy time to market, which stands at around 14 years for a candidate drug, as well as reducing the costs of drug development, which run into the billions of dollars per drug.


Healthcare

Health service partners with Alexa to provide medical support


The U.K. National Health Service (NHS) is partnering with Amazon’s Alexa to provide health information. The service is being piloted as an alternative to medical advice helplines and as a way to reduce the number of medical appointments.

While the U.K. NHS is much admired around the world as a free-at-the-point-of-use healthcare system, health officials are always keen to find ways to reduce strain on the system, especially around medical visits, where booking appointments and waiting for sessions with doctors can be lengthy. The average wait for a non-emergency appointment with a general practitioner is around two weeks.

Although a non-emergency medical helpline (accessed by dialling 111) and an online system are already active, health officials are keen to explore other ways for the U.K. population to access medical services. For this reason, NHS England is partnering with Amazon.

The use of Alexa voice technology not only offers an alternative service for digitally-savvy patients, it provides a potentially easier route for elderly and visually impaired citizens, as well as those who cannot access the Internet through a keyboard, to reach health information. This fits with NHSX, a new U.K. Government initiative under the NHS Long Term Plan that is intended to make more NHS services available digitally.

As PharmaPhorum reports, Alexa can now answer questions such as “Alexa, how do I treat a migraine?” and “Alexa, what are the symptoms of flu?”
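For a sense of the mechanics, here is a hypothetical sketch of how such a question might be answered inside an Alexa skill built with Amazon’s ask-sdk for Python; the intent name and response text are invented, and the actual NHS integration is not public code:

```python
# Hypothetical Alexa skill handler for a health question. The intent
# name and response wording are invented for illustration.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

class FluSymptomsHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("FluSymptomsIntent")(handler_input)

    def handle(self, handler_input):
        # In the NHS pilot, answers would be drawn from vetted NHS content.
        speech = ("Common flu symptoms include a sudden fever, aches, "
                  "a dry cough and tiredness.")
        return handler_input.response_builder.speak(speech).response

sb = SkillBuilder()
sb.add_request_handler(FluSymptomsHandler())
handler = sb.lambda_handler()   # entry point when hosted on AWS Lambda
```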

Outside of the U.K., Amazon is working with several healthcare providers, including digital health coaching companies, in order to launch six new Alexa healthcare ‘skills’. According to Rachel Jiang, head of Alexa Health & Wellness: “Every day developers are inventing with voice to build helpful and convenient experiences for their customers. These new skills are designed to help customers manage a variety of healthcare needs at home simply using voice – whether it’s booking a medical appointment, accessing hospital post-discharge instructions, checking on the status of a prescription delivery and more.”
