

Canadians up in arms: Privacy without consent and the dangerous precedent


It’s the news that has taken Canada by storm of late, on Twitter, in the headlines, and in today’s parliamentary debate: Statistics Canada, the federal agency that produces statistical research on the state of Canada, its population, economy and culture, walked into the spotlight when Global News revealed the agency had asked TransUnion, a credit bureau that amasses credit information for many financial institutions, to provide the financial transactions and credit histories of approximately 500,000 Canadians, without their prior individual consent. The Liberal government has endorsed this move.

During the parliamentary debate, Conservative MP Gérard Deltell declared:

If the state has no business in people’s bedrooms, the state has no business in their bank accounts either. There is no place for this kind of intrusion in Canada. Why are the Liberals defending the [Statistics Canada] indefensible? 

The data being demanded, according to Global News, consists of private information including name, address, date of birth, SIN, account balances, debit and credit transactions, mortgage payments, e-transfers, overdue amounts, and biggest debts, spanning 15 years’ worth of records. Equifax, the other credit reporting agency that supports financial institutions in Canada, has not been asked to provide data.

François-Philippe Champagne, Minister of Infrastructure and Communities, was vague in his response. While he affirmed Statistics Canada’s upstanding practices in anonymizing and protecting personal data, he also admitted that proper consent had not been obtained:

StatsCan is going above the law and is asking banks to notify clients of this use. Stats Canada is on their side… We know data is a good place to start to make policy decisions in this country, and we will treat the information in accordance with the law. They can trust Statistics Canada to do the right thing.

Statistics Canada and the Liberal government failed to disclose the explicit use of this information; however,

By law, the agency can ask for any information it wants from any source.

I posed this question to former three-term Privacy Commissioner Ann Cavoukian, who currently leads the Privacy by Design practice at Ryerson University in Toronto:

Ann Cavoukian’s response, via Twitter.

What’s troubling is that while the opposition cried foul, hurling accusations of authoritarianism and surveillance, the latter outcome is not implausible.

According to the Personal Information Protection and Electronic Documents Act (PIPEDA) Guidelines for Obtaining Meaningful Consent, these are the main exceptions to the consent requirement:

  • if the collection and use are clearly in the interests of the individual and consent cannot be obtained in a timely manner;
  • if the collection and use with consent would compromise the availability or the accuracy of the information and the collection is reasonable for purposes related to investigating a breach of an agreement or a contravention of the laws of Canada or a province;
  • if disclosure is required to comply with a subpoena, warrant, court order, or rules of the court relating to the production of records;
  • if the disclosure is made to another organization and is reasonable for the purposes of investigating a breach of an agreement or a contravention of the laws of Canada or a province that has been, is being or is about to be committed and it is reasonable to expect that disclosure with the knowledge or consent of the individual would compromise the investigation;
  • if the disclosure is made to another organization and is reasonable for the purposes of detecting or suppressing fraud or of preventing fraud that is likely to be committed and it is reasonable to expect that the disclosure with the knowledge or consent of the individual would compromise the ability to prevent, detect or suppress the fraud;
  • if required by law.

For Statistics Canada, its broad legal reach is enough to circumvent explicit disclosure of data use and explicit permission. This alone sets a dangerous precedent that sits uneasily alongside current European GDPR mandates, which are expected to inform an updated PIPEDA at a date yet to be determined.

This privilege will not make Statistics Canada immune to data breaches; in fact, it will make the agency an even more attractive target for hackers. According to the Breach Level Index, more than 13 billion records have been lost or stolen since 2013, an average of more than 6.3 million per day, and the increasing centralization of data makes such breaches more likely. Statistics Canada, which already collects tax filings, census, location, household, demographic, usage, health and economic data, is increasingly amassing that data online. According to National Newswatch, dwindling survey completions and costly census programs have necessitated a move to compile information from other organizations, such as financial institutions, which offer lower costs and better data quality.

If this is the catalyst for aggregating compiled information with the goal of record linkage, it will raise significant privacy alarms in the process. For Statistics Canada, which has received significant government support because of the critical information it lends to policy decisions, there are looming dangers, beyond breach vulnerabilities, in becoming the purveyor of every Canadian’s private information.

Anonymized Data Doesn’t Mean Anonymous Forever

I spoke to Alejandro Saucedo, the Chief Scientist at The Institute for Ethical AI & Machine Learning, a UK-based research centre that develops industry standards and frameworks for responsible machine learning development, and asked him to weigh in on this issue:

Canadians are rightly worried. It concerns me that StatsCanada is suggesting that just discarding names and addresses would be enough to anonymize the data. Not to point out the obvious, but data re-identification is actually a big problem. There have been countless cases where anonymized datasets have been reverse engineered, let alone datasets as rich as this one. 

Re-identification reverses the anonymization of data, using alternative data sources to link information back to an identity. With publicly available data, easily found in today’s big-data environment, coupled with the speed of advanced algorithms, Saucedo points to successful re-identification attempts: reverse-engineering credit card metadata, or the engineer who reconstructed a complete NYC taxi data dump of 173 million trips and fare logs by reversing the hashing function that was supposed to anonymize the medallion and taxi licence numbers.
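
To see why simply hashing an identifier does little to anonymize it when the space of possible values is small, consider the sketch below. It is purely illustrative: the one-letter-plus-three-digit medallion format and the MD5 hash are simplifying assumptions, not the exact scheme used in the NYC release, but the brute-force idea is the same.

```python
import hashlib
import string
from itertools import product

def candidate_medallions():
    """Enumerate a hypothetical medallion format: one uppercase letter + three digits."""
    for letter in string.ascii_uppercase:
        for digits in product("0123456789", repeat=3):
            yield letter + "".join(digits)

def build_lookup_table():
    """Hash every possible medallion (only 26,000 candidates) and map hash -> plaintext."""
    return {hashlib.md5(m.encode()).hexdigest(): m for m in candidate_medallions()}

def deanonymize(hashed_ids, table):
    """Recover the original medallions behind their 'anonymized' hashes."""
    return [table.get(h, "<unknown>") for h in hashed_ids]

if __name__ == "__main__":
    table = build_lookup_table()
    published = [hashlib.md5(b"K482").hexdigest()]  # an 'anonymized' record as released
    print(deanonymize(published, table))            # -> ['K482']
```

Because the entire identifier space can be enumerated in milliseconds, the hash adds essentially no protection; this is the core weakness the taxi re-identification exploited.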

Ethical hacks are not new to banking, or to any company that collects and manages significant data volumes. These are hacks that corporations deliberately run against their own infrastructure to find and mitigate vulnerabilities, on-premise and online. The practice keeps an organization up to par with the latest methods for encryption and security, as well as with current breach techniques. As Saucedo points out:

Even if StatsCanada didn’t get access to people’s names (e.g. requested the data previously aggregated), it concerns me that there is no mention of more advanced methods for anonymization. Differential Privacy, for example, is a technique that adds statistical noise to the entire dataset, protecting users whilst still allowing for high-level analysis. Some tech companies have been exploring different techniques to improve privacy – governments should have a much more active role in this space.

Both Apple and Uber are incorporating differential privacy. The goal is to mine and analyze usage patterns without compromising individual privacy: because the aggregate behavioural patterns are what matter to the analysis, “mathematical noise” is added to conceal individual identities, which becomes ever more important as more data is collected to establish those patterns. It is not a perfect methodology, but Apple and Uber are making momentous strides in making individual privacy the backbone of their data collection practices.
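
As a rough illustration of the idea, the sketch below applies the classic Laplace mechanism to a single count query. This is a generic textbook construction, not a description of how Apple, Uber, or Statistics Canada actually deploy differential privacy (Apple, for instance, applies noise locally on the device); the query and epsilon values are made up for the example.

```python
import numpy as np

def dp_count(true_count, epsilon, sensitivity=1.0, rng=None):
    """Release a count under epsilon-differential privacy by adding Laplace noise
    with scale sensitivity / epsilon (the Laplace mechanism)."""
    rng = rng or np.random.default_rng()
    return true_count + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

if __name__ == "__main__":
    true_count = 1284  # hypothetical aggregate, e.g. households in a region with overdue payments
    for eps in (0.1, 1.0, 10.0):
        print(f"epsilon={eps}: reported count = {dp_count(true_count, eps):.1f}")
    # Smaller epsilon means more noise: stronger privacy, less accuracy.
```

The trade-off is explicit in the parameter: the privacy budget epsilon controls how much any single individual’s presence can shift the released statistic.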

Legislation Needs to be Synchronous with Technology

GDPR is nascent, and its rules will evolve as technology surfaces other invasive harms. Government is lagging behind technology. Any legislation that does not enforce fines for significant breaches, as in the cases of Google Plus, Facebook or Equifax, will only ensure that business and government maintain the status quo.

Communicating the new order of data ownership will continue to be an uphill battle for the foreseeable future. Systems, standards and significant investment in transforming policy and structure will take time. For Statistics Canada and the Canadian government, creating frameworks that give individuals unequivocal control of their data requires education, training, and widespread awareness. Saucedo concedes:

 A lot of great thinkers are pushing for this, but for this to work we need the legal and technological infrastructure to support it. Given the conflict of interest that the private sector often may face in this area, this is something that the public sector will have to push. I do have to give huge credit to the European Union for taking the first step with GDPR – although far from perfect, it is still a step in the right direction for privacy protection.

(Update) As of November 1, 2018, Petition E-192 (Privacy and Data Protection) has been put forward to the House of Commons, calling for the revocation of this initiative. 21,000 signatures have been collected to date. Canadians interested in adding their names to the petition can do so.
Petition to the House of Commons
Whereas:
  • The government plans to allow Statistics Canada to gather transactional level personal banking information of 500,000 Canadians without their knowledge or consent;
  • Canadians’ personal financial and banking information belongs to them, not to the government;
  • Canadians have a right to privacy and to know and consent to when their financial and banking information is being accessed and for what purpose;
  • Media reports highlight that this banking information is being collected for the purposes of developing “a new institutional personal information bank”; and
  • This is a gross intrusion into Canadians’ personal and private lives.
We, the undersigned, Citizens and Residents of Canada, call upon the Government of Canada to immediately cancel this initiative, which amounts to a gross invasion of privacy, and ensure such requests for personal data never happen again.

This post first appeared on Forbes.



Intel embraces DX at Data-Centric Innovation Day


Intel’s recent Data-Centric Innovation Day in San Francisco showed how the company is putting digital transformation at the forefront of its business strategy, building a bridge from its former position as the big name in PC CPUs toward a more agile future. In a competitive world of business technology startups and scaleups, Intel is putting its DX foot forward and showing how its own innovations can help its global customers embrace the wins that come with digital transformation.

While the event was a product launch for all intents and purposes, there was a bigger story going on at Data-Centric Innovation Day: the positioning of Intel as a data-centric enterprise and the company’s emphasis on collaboration with its customers around the world as they undertake digital transformation.

At the event’s outset, Intel CEO Robert Swan predicted that the company’s data-centric total addressable market will reach $200 billion by 2022. As a continually growing number of organizations move to the cloud, and C-suites continue to look to AI and analytics to develop their competitive advantage, this kind of market growth for the IT giant seems reasonable.

At the core of Intel’s data-driven shift is the customer experience. As Swan stated at the event, Intel is looking to become ‘customer-obsessed’ through the company’s new focus on data. While the role of a processor or a new hardware product within enterprise organizations has not radically shifted (it remains just one piece within the larger technology structures powering digital transformation), Intel’s attitude toward its hardware and software offerings, and how they play into the customer’s overall business technology experience, has certainly taken a big leap forward.

The 2nd-Gen Intel Xeon Scalable Processors are all about data and digital transformation.

In a press release for the event, Navin Shenoy, Intel executive vice president and general manager of the Data Center Group, noted that the new technology was all about putting data first:

“Today’s announcements reflect Intel’s new data-centric strategy. The portfolio of products announced today underscores our unmatched ability to move, store and process data across the most demanding workloads from the data center to the edge. Our 2nd-Generation Xeon Scalable processor with built-in AI acceleration and support for the revolutionary Intel Optane DC persistent memory will unleash the next wave of growth for our customers.”

Intel unveiled a new range of products, including the next generation of Xeon Scalable Processors. The new Xeon line was designed with DX tasks in mind, and the processors look to aid Intel clients with AI processes, cloud and edge computing, and rapidly growing workloads. The new processors feature DL Boost, a unique inference-acceleration feature designed specifically for AI-heavy processes.


Lisa Davis (left), Intel’s VP of Data Center Group and General Manager of Digital Transformation and Scale Solutions, unveils new security solutions.

The technology giant also emphasized the security enhancements of the new range. Lisa Davis, Intel’s VP of Digital Transformation, announced during the event that Intel has partnered with Lockheed Martin to create hardened, full-stack security solutions for CIOs and CISOs. Processing and moving more data than ever requires ever-evolving security, and Intel made a point of emphasizing its dedication to this element of the new product line.

DX at the heart of Intel’s announcements

For an established tech company like Intel to take on data in such a massive way should be no surprise to digital transformation diehards. But for those still hesitant to make data management a bigger part of their organization, Intel’s focus on moving, storing and processing every bit of client data should act as a wake-up call.

The shape that Intel’s technology is taking, as innovations like DL Boost and the cloud-centric nature of the company’s new security offerings show, is all about meeting the digital transformation needs of customers around the world.

“You can’t digitally transform as an organization if you’re focused on aging IT practices,” said Intel Canada’s Phil Vokins during an interview on the day of the event. “I think the one thing we’ve seen today which we should all be excited about is the range of capabilities and performance that we’re enabling, which was unthinkable even a couple of years ago. It’s not just about the performance of the processor, but look at the memory we can have per socket now. This will really enable businesses to take advantage of the information they have.”

Collaboration with partners and clients key

This focus on a holistic approach to data is not something Intel is doing on its own. The emphasis of Intel’s Data-Centric Innovation Day was so clearly on collaboration, with many major players in the IT and enterprise world contributing to the event. During his keynote, Shenoy was vocal about Intel’s broad set of partners and customers, emphasizing branching out and building a bigger business ecosystem.

Every technology showcased during the product launch was tied back to one of Intel’s global partners: AWS, Vodafone, Twitter, Microsoft, Alibaba, and other companies were featured and promoted through Intel’s own announcements. Featuring partners like this led to some very conversational panels on the nitty gritty of DX throughout the day’s events. But this collaborative approach to the technology also highlighted another aspect of Intel’s digital transformation journey.

Vokins said that, for Intel, the process of digital transformation is also a question of interpreting what’s happening in the world of business technology and turning that information into valuable insights to improve performance.

“We’re in a very fortunate position, given our market share, that we have huge amounts of information and resources and access to leading businesses. So we need to make sure that we can disseminate, understand and rearticulate that information back.”

Vokins emphasized the need to collaborate around each digital transformation insight, “so that we can all learn from it, and learn how customers are embracing technology to rapidly improve performance.”



How edge computing can boost business efficiency


Edge computing is about processing data as close to the source as possible, which reduces both latency and bandwidth use. This concept is seen as critical for furthering the Internet of Things and for driving the development of autonomous vehicles.

What is edge computing?

Edge computing is a decentralized approach to computing applied to networks (the opposite of cloud computing’s centralized approach). The concept relates to where a network stores and processes its information: in edge computing, data is kept and handled close to the devices that generate it rather than being sent to a distant central facility. For businesses, this often means data is moved onto a local, private server.

Edge computing is especially useful where a lot of data is generated. The approach allows data to be triaged at the source, so that some of it is processed locally and only what is necessary travels onward, reducing backhaul traffic to the central data repository. This is particularly valuable when many devices are connected together, as with the Internet of Things.

Edge computing helps to make the Industrial Internet of Things possible, and this is an area of great value: McKinsey & Co. calculate that the Industrial Internet of Things will generate $7.5 trillion in value by 2025. The advantage lies in connecting people to machine data in ways that accelerate digital industrial transformation.

How can edge computing benefit business?

The advantages of edge computing are that it takes less time to move data and that hardware limitations are more easily addressed. With conventional, centralized storage systems, data must pass through shared hardware, which can create a bottleneck that restricts how much data can be moved at any point in time and slows transfer speeds.

Furthermore, operating and maintaining that centralized hardware is comparatively expensive.

Security is also stronger with edge computing, making such systems harder for hackers to penetrate, because data is continually moving between network nodes.

When data move through a network, they pass through different security layers designed to keep hackers out of the system, but edge computing goes beyond this. More security layers can be applied because, rather than data simply moving between network nodes, it moves from the Internet into local servers and then onto the nodes, providing opportunities to add firewalls and antivirus scanning along the way.

How are businesses using edge computing?

Businesses can derive many advantages from the edge computing concept. Edge processing enables analytics and data gathering to occur at the source of the data, which lets companies leverage resources from devices that are not necessarily continuously connected to a network, such as laptops, smartphones, tablets and sensors.

Autonomous vehicles and edge computing

Among the more specific examples is autonomous car technology. Autonomous vehicles are, in a sense, data centres on wheels, and here edge computing plays a key role in handling the high volumes of data they collect. Intel estimates that autonomous cars, with their many on-vehicle sensors, generate over 40 terabytes of data for every eight hours of driving. Given that this volume of data cannot easily be sent to the cloud (and that doing so would also present a safety risk in the form of delayed reactions), edge computing becomes a necessity.

Security cameras and edge computing

A second example is security systems. If a large complex is served by dozens of high-definition Internet of Things video cameras that continuously stream their footage to a cloud server, the system can be slow to respond, especially if the security protocol is designed to react to motion detection. This set-up also places a major strain on the building’s Internet infrastructure, with a high proportion of the bandwidth consumed by video footage.

With the edge concept, each camera has an independent internal computer that runs the motion-detecting application and sends footage to the cloud server only as needed, as in the sketch below. This improves efficiency and lowers bandwidth use.
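
As a rough sketch of that pattern, the snippet below runs a cheap frame-difference motion check on the camera itself and only hands frames to an upload callback when motion is detected. It is illustrative only: the grayscale NumPy frames, the threshold value, and the `upload` callback are assumptions standing in for a real camera SDK and transport.

```python
import numpy as np

MOTION_THRESHOLD = 5.0  # mean absolute pixel difference; would be tuned per camera

def motion_detected(prev_frame, frame, threshold=MOTION_THRESHOLD):
    """Cheap motion check run on the camera itself (the 'edge')."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff.mean() > threshold

def process_stream(frames, upload):
    """Ship frames to the cloud only when motion is detected, instead of streaming everything."""
    prev = None
    for frame in frames:
        if prev is not None and motion_detected(prev, frame):
            upload(frame)  # e.g. push to the building's cloud video service
        prev = frame

if __name__ == "__main__":
    # Simulated 8-bit grayscale frames: a static scene, then a change.
    rng = np.random.default_rng(0)
    still = rng.integers(0, 255, size=(120, 160), dtype=np.uint8)
    moved = still.copy()
    moved[40:80, 60:100] = 255  # something enters the frame
    process_stream([still, still, moved], upload=lambda f: print("uploading frame"))
```

Only the final frame triggers an upload; the unchanged frames never leave the device, which is exactly where the bandwidth saving comes from.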

Fleet management and edge computing

Edge computing also helps to improve the efficiency of fleet management. A large volume of key performance data needs to be collected – wheels, brakes, battery, electrical systems – and where that data requires an immediate response, such as a potential brake failure, some of it must be collected, stored and analyzed locally on the edge to minimize the risk of vehicle breakdown or accident.

An example of edge computing applied to fleet management is trailer temperature. With most fleet monitoring systems, only temperature readings that fall outside a set range are reported back to fleet managers through telematics, and the fleet manager then has to assess whether there is a problem. With edge analytics, temperature readings can instead be analyzed onboard the vehicle and the driver notified directly, empowering the driver to take steps to mitigate the temperature fluctuation, as in the sketch below.
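
Here is a minimal sketch of that onboard logic, assuming a hypothetical refrigerated-trailer setpoint band and a simple rolling average; a production telematics unit would use the fleet's own thresholds, sensors and transport.

```python
from collections import deque
from statistics import mean

SAFE_RANGE = (-20.0, -15.0)  # hypothetical setpoint band for a refrigerated trailer, in degrees C

class TrailerTempMonitor:
    """Runs on the vehicle: keeps a short rolling window of readings and decides
    locally whether to alert the driver and report back over telematics."""

    def __init__(self, window=12):
        self.readings = deque(maxlen=window)

    def ingest(self, celsius):
        self.readings.append(celsius)
        avg = mean(self.readings)
        out_of_range = not (SAFE_RANGE[0] <= avg <= SAFE_RANGE[1])
        return {"avg_temp": round(avg, 2),
                "alert_driver": out_of_range,       # cab display / audible alert
                "report_telematics": out_of_range}  # only exceptions leave the vehicle

if __name__ == "__main__":
    monitor = TrailerTempMonitor(window=3)
    for reading in (-18.0, -17.5, -16.0, -14.0, -12.5):
        print(monitor.ingest(reading))
```

Because the decision is made on the vehicle, routine in-range readings never need to cross the cellular link, and the driver learns about a drifting temperature before the fleet manager would.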



VMware and AWS unleash hybrid cloud options in Canada


Cloud platforms are transforming the way organizations do business, and the competition between cloud providers is fierce. VMware and Amazon Web Services are partnering to provide cloud solutions to businesses in Canada.

At a roundtable in Toronto, data center company VMware, IT partner Scalar Decisions and AWS discussed how the new VMware Cloud on AWS service will open up new options for customers. The organizations hope that the new arrangement will act as a catalyst to get Canadian organizations, private and public, to the cloud.

Easing cloud migration for Canadian companies

The new offering from VMware and AWS looks to provide customers with powerful hybrid cloud options in order to help them benefit from AWS’ many capabilities. The new, on-demand service allows organizations working with VMware to extend, migrate and manage their cloud-based resources with the use of AWS services.

Sean Forkan, Vice President and Country Manager at VMware, stated that innovation in the public cloud is happening daily on a global scale, and thanks to VMware Cloud, Canadian companies can now benefit from these transformative shifts coming from within AWS.

The VMware Cloud service lives in the same regions and availability zones as Amazon services and is managed by VMware. Customers can be served either by AWS’ Montreal-based data centre or by U.S. data centres, depending on their data residency requirements. Over time, both VMware and AWS hope to see a greater merging of the two companies’ tools.

Peter Near, National Director of Solutions Engineering with VMware Canada, said that the transition to cloud services for businesses is not just a question of efficiency but of global performance. And while the majority of data sets in Canadian databases are not easy to migrate, Near predicted that the new offering from VMware and AWS gives these companies an ‘easy button’ for migration.

Transition to cloud has never been more popular

In a recent survey by multi-cloud management company RightScale, 95 percent of respondents said they are using cloud in some way. Hybrid and public cloud were far and away the most popular amongst adopters, with 85 percent of surveyed businesses citing some kind of hybrid cloud strategy, while only 10 percent of respondents cited the use of a single public cloud.

As Eric Gales, Director of AWS Canada, said during the roundtable, “It used to be that owning and operating infrastructure was an advantage.” According to Gales, eliminating the ownership and operation of costly infrastructure is at the heart of the pronounced increase in cloud adoption in Canada.

Gales noted that artificial intelligence and machine learning are also driving organizations towards the scalability and on-demand talent of public cloud services. AI and ML need a lot of computing, said Gales, and now VMware can use existing apps and workloads through AWS to accelerate and amplify the use of these apps. In terms of talent required to scale AI and ML tools, this could be a boon for medium-sized businesses, as the “surface area of new things they need to develop skills for or learn is lower,” said Gales.

