It’s the news that has taken Canada by storm of late, on Twitter, in the headlines, and in today’s parliamentary debate. Statistics Canada, the federal agency that publishes statistical research on the state of Canada, its population, economy and culture, unwittingly walked into the spotlight when Global News revealed the agency had asked TransUnion, a credit bureau that amasses credit information for many financial institutions, to provide the financial transactions and credit histories of approximately 500,000 Canadians, without their prior individual consent. The Liberal government has endorsed the move.
During the parliamentary debate, Conservative MP Gérard Deltell declared,
If the state has no business in people’s bedrooms, the state has no business in their bank accounts either. There is no place for this kind of intrusion in Canada. Why are the Liberals defending the [Statistics Canada] indefensible?
The data being demanded, according to Global News, consists of private information including name, address, date of birth, SIN, account balances, debit and credit transactions, mortgage payments, e-transfers, overdue amounts, and largest debts, spanning 15 years of data. Equifax, the other credit reporting agency that serves financial institutions in Canada, has not been asked to provide data.
François-Philippe Champagne, Minister of Infrastructure and Communities, was vague in his response. While he affirmed Statistics Canada’s upstanding practices in anonymizing and protecting personal data, he also admitted proper consent was not obtained,
StatsCan is going above the law and is asking banks to notify clients of this use. Stats Canada is on their side… We know data is a good place to start to make policy decisions in this country, and we will treat the information in accordance with the law. They can trust Statistics Canada to do the right thing.
Statistics Canada and the Liberal government failed to disclose the explicit use of this information, however,
By law, the agency can ask for any information it wants from any source.
I posed this question to former three-term Privacy Commissioner Ann Cavoukian, who currently leads the Privacy by Design practice at Ryerson University in Toronto:
What’s troubling is that while the opposition cried foul, hurling accusations of authoritarianism and surveillance, the latter outcome is not implausible.
Under PIPEDA, Canada’s private-sector privacy law, organizations may collect, use or disclose personal information without an individual’s knowledge or consent only in limited circumstances, including:
- if the collection and use are clearly in the interests of the individual and consent cannot be obtained in a timely manner;
- if the collection and use with consent would compromise the availability or the accuracy of the information and the collection is reasonable for purposes related to investigating a breach of an agreement or a contravention of the laws of Canada or a province;
- if disclosure is required to comply with a subpoena, warrant, court order, or rules of the court relating to the production of records;
- if the disclosure is made to another organization and is reasonable for the purposes of investigating a breach of an agreement or a contravention of the laws of Canada or a province that has been, is being or is about to be committed and it is reasonable to expect that disclosure with the knowledge or consent of the individual would compromise the investigation;
- if the disclosure is made to another organization and is reasonable for the purposes of detecting or suppressing fraud or of preventing fraud that is likely to be committed and it is reasonable to expect that the disclosure with the knowledge or consent of the individual would compromise the ability to prevent, detect or suppress the fraud;
- if required by law.
For Statistics Canada, this broad legal reach is enough to circumvent explicit disclosure of how the data will be used and explicit permission to use it. That alone sets a dangerous precedent, one that sits uneasily with Europe’s GDPR mandates, which are expected to inform the updated PIPEDA at a date yet to be determined.
However, this privilege will not make Statistics Canada immune to data breaches; in fact, it will make the agency a more attractive target for hackers. According to the Breach Level Index, more than 13 billion records have been lost or stolen since 2013, an average of more than 6.3 million per day. The increasing centralization of data makes breaches more likely. Statistics Canada, which has long collected tax filings, census data, and location, household, demographic, usage, health and economic data, is increasingly amassing that data online. According to National Newswatch, dwindling survey completions and costly census programs have pushed the agency to compile information from other organizations, such as financial institutions, which offer more reasonable costs and better data quality.
If this is the catalyst for aggregating compiled information with the goal of record linking, it will raise significant privacy alarms in the process. For Statistics Canada, which has received strong government support because of the critical information it lends to policy decisions, there are looming dangers in becoming the purveyor of every Canadian’s private information, beyond its vulnerability to data breaches.
Anonymized Data Doesn’t Mean Anonymous Forever
I spoke to Alejandro Saucedo, Chief Scientist at The Institute for Ethical AI & Machine Learning, a UK-based research centre that develops industry standards and frameworks for responsible machine learning development, and asked him to weigh in on this issue:
Canadians are rightly worried. It concerns me that StatsCanada is suggesting that just discarding names and addresses would be enough to anonymize the data. Not to point out the obvious, but data re-identification is actually a big problem. There have been countless cases where anonymized datasets have been reverse engineered, let alone datasets as rich as this one.
Re-identification reverses a dataset’s anonymized state by linking its records to identities through alternative data sources. Using publicly available data, easily found in today’s big-data environment, coupled with the speed of advanced algorithms, attackers have mounted successful re-identification attempts. Saucedo points to the reverse engineering of credit card data, and to the engineer who de-anonymized a complete NYC taxi data dump of 173 million trip and fare logs by reversing the unsalted hashing scheme used to anonymize the medallion and taxi licence numbers.
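The NYC taxi case shows how thin this kind of “anonymization” can be. The sketch below, using synthetic licence IDs (the ID format and values here are illustrative assumptions, not the real NYC data), demonstrates why hashing a small identifier space without a salt is trivially reversible: an attacker simply hashes every possible ID and builds a reverse lookup table.

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

# Hypothetical "anonymized" trip records: licence IDs replaced by unsalted
# MD5 hashes, mirroring the structure of the NYC taxi release.
anonymized_trips = [
    {"licence_hash": md5_hex("5X41"), "fare": 12.50},
    {"licence_hash": md5_hex("8A23"), "fare": 7.25},
]

def build_rainbow_table():
    """Hash every candidate ID in the (tiny) licence space: digit + letter
    + two digits gives only 10 * 26 * 100 = 26,000 possibilities."""
    table = {}
    for d in range(10):
        for letter in "ABCDEFGHIJKLMNOPQRSTUVWXYZ":
            for num in range(100):
                candidate = f"{d}{letter}{num:02d}"
                table[md5_hex(candidate)] = candidate
    return table

# The "attack": one pass over the whole ID space, then every record in the
# dataset is de-anonymized with a dictionary lookup.
table = build_rainbow_table()
for trip in anonymized_trips:
    trip["licence"] = table.get(trip["licence_hash"])
```

The point is not the specific hash function: any deterministic mapping over a small, enumerable identifier space can be reversed this way, which is why hashing alone is not anonymization.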
Ethical hacks are not new to banking, or to any company that collects and manages significant data volumes. These are intentional attacks that corporations run internally against their own infrastructure to find and mitigate vulnerabilities, both on-premise and online. The practice keeps the organization up to par with the latest encryption and security methods, as well as with current breach techniques. As Saucedo points out:
Even if StatsCanada didn’t get access to people’s names (e.g. requested the data previously aggregated), it concerns me that there is no mention of more advanced methods for anonymization. Differential Privacy, for example, is a technique that adds statistical noise to the entire dataset, protecting users whilst still allowing for high-level analysis. Some tech companies have been exploring different techniques to improve privacy – governments should have a much more active role in this space.
Both Apple and Uber are incorporating differential privacy. The goal is to mine and analyze usage patterns without compromising individual privacy. Since the behavioural patterns, rather than the individuals, are what matter to the analysis, mathematical noise is added to conceal identity. This becomes more important as more data is collected to establish those patterns. The methodology is not perfect, but Apple and Uber are making real strides toward making individual privacy the backbone of their data collection practices.
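To make the idea concrete, here is a minimal sketch of the Laplace mechanism for a count query, the textbook form of differential privacy, using only the Python standard library. The data and epsilon value are illustrative assumptions, and a production system would use an audited library rather than hand-rolled noise.

```python
import random

def dp_count(values, predicate, epsilon=0.5):
    """Differentially private count: the true count plus Laplace noise.

    A count query has sensitivity 1 (adding or removing one person changes
    the answer by at most 1), so the Laplace noise scale is 1/epsilon.
    """
    true_count = sum(1 for v in values if predicate(v))
    # The difference of two Exponential(rate=epsilon) draws is distributed
    # as Laplace(0, 1/epsilon).
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# Hypothetical account balances; the analyst learns roughly how many are
# overdrawn, but no single individual's presence is revealed.
balances = [120.0, -35.5, 0.0, -210.0, 45.2]
noisy_overdrawn = dp_count(balances, lambda b: b < 0, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; the aggregate answer stays usable because the noise averages out across many queries over large populations.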
Legislation Needs to be Synchronous with Technology
GDPR is nascent. Its rules will evolve as technology surfaces other invasive harms, and government continues to lag behind technology. Any legislation that does not enforce fines for significant breaches, as in the cases of Google Plus, Facebook or Equifax, will only ensure that business and government maintain the status quo.
Communicating the new order of data ownership will continue to be an uphill battle for the foreseeable future. Systems, standards and significant investment in transforming policy and structure will take time. For Statistics Canada and the Canadian government, creating frameworks that give individuals unequivocal control of their data requires education, training, and widespread awareness. Saucedo concedes,
A lot of great thinkers are pushing for this, but for this to work we need the legal and technological infrastructure to support it. Given the conflict of interest that the private sector often may face in this area, this is something that the public sector will have to push. I do have to give huge credit to the European Union for taking the first step with GDPR – although far from perfect, it is still a step in the right direction for privacy protection.
Petition to the House of Commons
Whereas:
- The government plans to allow Statistics Canada to gather transactional level personal banking information of 500,000 Canadians without their knowledge or consent;
- Canadians’ personal financial and banking information belongs to them, not to the government;
- Canadians have a right to privacy and to know and consent to when their financial and banking information is being accessed and for what purpose;
- Media reports highlight that this banking information is being collected for the purposes of developing “a new institutional personal information bank”; and
- This is a gross intrusion into Canadians’ personal and private lives.
This post first appeared on Forbes.
Hessie Jones is the Founder of ArCompany advocating AI readiness, education and the ethical distribution of AI. She is also Cofounder of Salsa AI, distributing AI to the masses. As a seasoned digital strategist, author, tech geek and data junkie, she has spent the last 18 years on the internet at Yahoo!, Aegis Media, CIBC, and Citi, as well as tech startups including Cerebri, OverlayTV and Jugnoo. Hessie saw things change rapidly when search and social started to change the game for advertising and decided to figure out the way new market dynamics would change corporate environments forever: in process, in culture and in mindset. She launched her own business, ArCompany in social intelligence, and now, AI readiness. Through the weekly think tank discussions her team curated, she surfaced the generational divide in this changing technology landscape across a multitude of topics. Hessie is also a regular contributor to Towards Data Science on Medium and Cognitive World publications.
This article solely represents my views and in no way reflects those of DXJournal. Please feel free to contact me firstname.lastname@example.org
Tesla wants its factory workers to wear futuristic augmented reality glasses on the assembly line
- Tesla patent filings reveal plans for augmented reality glasses to assist with manufacturing.
- Factory employees previously used Google Glass on Tesla’s assembly line, as recently as 2016.
To cut down on the number of fit-and-finish issues, like the “significant inconsistencies” found by UBS, Tesla employees on the assembly line could soon use augmented reality glasses similar to Google Glass to help with car production, according to new patent filings.
Last week, Tesla filed two augmented reality patents that outline a futuristic vision for the relationship between humans and robots when it comes to manufacturing. The “smart glasses” would double as safety glasses, and would help workers identify places for joints, spot welds, and more, the filings say.
Here’s the specific technical jargon outlining the invention (emphasis ours):
The AR device captures a live view of an object of interest, for example, a view of one or more automotive parts. The AR device determines the location of the device as well as the location and type of the object of interest. For example, the AR device identifies that the object of interest is a right hand front shock tower of a vehicle. The AR device then overlays data corresponding to features of the object of interest, such as mechanical joints, interfaces with other parts, thickness of e-coating, etc. on top of the view of the object of interest. Examples of the joint features include spot welds, self-pierced rivets, laser welds, structural adhesive, and sealers, among others. As the user moves around the object, the view of the object from the perspective of the AR device and the overlaid data of the detected features adjust accordingly.
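As an illustration only, the pipeline the patent describes (detect the part, look up its joint features, project them onto the live view) could be sketched as follows. Every name, part type and coordinate below is hypothetical, and the projection is a placeholder rather than a real camera model.

```python
from dataclasses import dataclass

@dataclass
class JointFeature:
    kind: str        # e.g. "spot_weld", "laser_weld", "rivet", "sealer"
    position: tuple  # (x, y, z) in the part's own coordinate frame

# Hypothetical feature database keyed by the part type the device's
# computer vision identifies; the patent's example is a right-hand front
# shock tower.
FEATURE_DB = {
    "front_shock_tower_rh": [
        JointFeature("spot_weld", (0.12, 0.40, 0.05)),
        JointFeature("self_pierced_rivet", (0.30, 0.22, 0.05)),
    ],
}

def project(position, device_pose):
    """Placeholder projection into screen space; a real system would use
    the camera intrinsics and the device's estimated 6-DoF pose."""
    x, y, _z = position
    dx, dy = device_pose
    return (x + dx, y + dy)

def overlay_features(detected_part: str, device_pose):
    """Given the detected part type and device pose, return the feature
    annotations to draw on top of the live view."""
    features = FEATURE_DB.get(detected_part, [])
    return [(f.kind, project(f.position, device_pose)) for f in features]
```

As the worker moves, the device pose changes and the overlay positions are recomputed each frame, which is the “adjust accordingly” behaviour the filing describes.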
As Electrek points out, Tesla previously employed Google Glass Enterprise as early as 2016, though it’s not clear how long it remained in use.
Tesla has a tricky relationship with robotics in its factory. In April, CEO Elon Musk admitted its Fremont, California factory had relied too heavily on automated processes. Those comments, to CBS This Morning, came after criticism from a Bernstein analyst who said “We believe Tesla has been too ambitious with automation on the Model 3 line.”
Still, the company seems to be hoping for a more harmonious relationship between human and machine this time around.
“Applying computer vision and augmented reality tools to the manufacturing process can significantly increase the speed and efficiency related to manufacturing and in particular to the manufacturing of automobile parts and vehicles,” the patent application reads.
This article was originally published on Business Insider. Copyright 2018.
The great buy-in: How to learn to love AI at work
The conversation around AI is changing — and the emphasis on augmenting current workers, rather than wholesale replacement of segments of the workforce, is a significant (and many would argue, necessary) shift. However, anxiety and fear remain tough obstacles for those trying to usher in a new era of AI-assisted workplaces.
Zoom.ai is a chat-based productivity tool that helps employees automate everyday tasks including searching for files, scheduling meetings, and generating documents. In an interview with DX Journal, Zoom.ai’s Sriubiskis said public opposition to AI remains a major stumbling block, not just for technology companies but for businesses around the world.
As the language around AI changes, it becomes obvious that people want change from the technology, but remain hesitant about the disruptive effect AI-based automation could bring to their industries.
As highlighted in a recent Forbes article, knowledge-based workers with tenure, who have developed their skill-set over a period of time, are acting along the lines of basic psychology when it comes to fear surrounding automation. Unfortunately, that push-back can severely stunt the success of digital transformation projects designed to improve the lives of workers throughout the company, not replace them.
“A lot of people are afraid that AI’s going to take their job away,” said Sriubiskis. “That’s because that’s the narrative that we’ve seen for so long. It’s now about shifting the narrative to: AI’s going to make your job better and give you more time to focus more on the things that you’ve been hired to do because you’re good at doing them. There are tons of websites online talking about whether your job’s going to be taken away by AI, but they never really talk about how people’s jobs are going to be improved and what things they won’t have to do anymore so they can focus on the things that actually matter.”
Buy-in requires tangible results
This general AI anxiety can seem like a big obstacle to companies looking to adopt AI — but there are important steps companies can take to ensure their AI on-boarding is done with greater understanding and effectiveness.
As startups and businesses look to break through the AI fear-mongering, they have to demonstrate measurable benefits to employees, showing how AI can make work easier. By building an understanding of how AI affects employees, showing them how it benefits them, and using that information to inspire confidence in the project, businesses can work to create a higher level of employee buy-in.
One of the simplest examples of how to demonstrate this kind of benefit comes from Zoom.ai’s digital assistant for the workplace. An immediately beneficial way AI can augment knowledge-based workers is by giving them back their time. According to McKinsey & Company research cited by Zoom.ai, knowledge workers spend 19 percent of their time — one day a week — searching for and gathering information scattered across app and database silos. By showing how the employee experience can be improved with automated meeting scheduling or document retrieval, you generate employee buy-in, said Sriubiskis.
“For us, the greatest advantage is giving employees some of their time back, so they can be more effective in the role that they were hired to do. So if there’s a knowledge-based worker, and they’re an engineer for example, they shouldn’t be spending time booking meetings, generating documents, finding information or submitting IT tickets. Their time would be better spent putting it towards their engineering work. For an enterprise company, based on our cases, we estimate that we can give employees at least 10 hours back a month. That allows them to be more productive, increase their collaboration and their creativity, and the overall employee experience improves.”
Full comprehension of a problem leads to better implementation
Another way to ensure a greater level of employee confidence is to understand the core problem that AI could be used to solve. You can’t just throw AI at an issue, said Sriubiskis. The application of the AI solution has to make sense in the context of an identified problem.
“When a lot of companies talk about their current endeavours, they’re saying, ‘we’re exploring AI to do this.’ But they’re not actually understanding a core problem that their employees are facing. If you just try to throw a new technology at a problem you don’t fully understand, you’re not going to be as successful as you want. You might be disappointed in that solution, and people are going to be frustrated that they wasted time without seeing any results.”
This deliberate effort to understand a key problem before implementing a solution can drive better outcomes. That’s why Zoom.ai has incorporated this kind of core observation into its process for on-boarding clients or approaching a new project.
“Before we do a proof-of-concept or a pilot now,” said Sriubiskis, “we require companies to do an interview with some of our product and our UI/UX team. That way, we can understand how they do things currently, but also so we can provide a quantitative metric. Qualitative is nice, but people also want to see the results, and make sure their work was worth it. We make sure to interview a whole bunch of users, clearly understand the problem, and make sure what we’re doing isn’t a barrier to what they’re actually trying to solve, it’s going to help it and help it more over time.”
These approaches are all about making the team of employees feel like an AI solution is working for them, leading to greater effectiveness of AI implementation to augment the workforce. It remains key, said Sriubiskis, to make sure employees can see the tangible benefits of the technology. Zoom.ai makes that employee experience a core part of their on-boarding process: “We report back to our users and tell them how many hours they’ve saved. So they see how the actual improvements are seen by them, not just by management or the company as a whole.”
The future is filled with AI. It’s just a question of making sure it helps, not hurts, human capital — and that a positive transition to AI tools prioritizes the employee experience along the way.
Navigating the AI Hype
Welcome to Navigating the AI Hype, a recurring article that curates events in AI to chart this unprecedented phenomenon as it makes its way into our lives: the good, the bad and the ugly. We will acknowledge successes in AI as well as areas that still require further progress. We will also highlight where human conscience will need to dictate policy and regulation, since ethical standards must be built in lockstep with technology as it evolves. Finally, we will point to references and resources for anyone wanting to dive deeper into artificial intelligence. Enjoy!
DeepMind AlphaFold Delivers “Unprecedented Progress” on Protein Folding
“Proteins are essential to life. Predicting their 3D structure is a major unsolved challenge in biology and could impact disease understanding and drug discovery. I’m excited to announce that we have won the CASP13 protein folding competition!”
Facebook And MIT Researchers Want To Use AI To Create Addresses For The Billions Of People Who Don’t Have One
“Artificial intelligence will revolutionize how we live, creating both incredible opportunity for benefits, as well as some disruption that will be important to manage,”
Tech giants offer empty apologies because users can’t quit
“Sorry means nothing since so does ‘We’re deleting.’”
DuckDuckGo Says Google’s Filter Bubble Is Real, and It Can Prove It
A study shows incognito mode does not mean anonymous
Microsoft President: We’ll Give Pentagon ‘All the Technology We Create’
“For us, we’ve been clear: we are gonna provide the US military with access to the best technology — to all the technology — we create. Full stop. We just said that flat out.”
LinkedIn used 18M non-member emails to target Facebook ads. Were you a victim?
A Data Protection Commissioner investigation found that LinkedIn violated data protection policies shortly before the onset of GDPR
Marriott hotels: data of 500m guests may have been exposed
“This indicates that as far as security monitoring and being able to respond in a timely and adequate fashion, Marriott had severe challenges being able to live up to its mission statement of keeping customer data safe.”
Quora data breach FAQ: What 100 million hacked users need to know
“On Friday [November 30] we discovered that some user data was compromised by a third party who gained unauthorized access to one of our systems.”
Emails of top NRCC officials stolen in major 2018 hack
“The NRCC can confirm that it was the victim of a cyber intrusion by an unknown entity. The cybersecurity of the Committee’s data is paramount, and upon learning of the intrusion, the NRCC immediately launched an internal investigation and notified the FBI, which is now investigating the matter,”
AI courses and resources