Technology

Accelerating the future of privacy through SmartData agents

Digital Life of Mind

Imagine a future where you can communicate with your smartphone – or whatever digital extension of you exists at that time – through an evolved smart digital agent that readily understands you, your needs, and exists on your behalf to procure the things and experiences you want. What if it could do all this while protecting and securing your personal information, putting you firmly in control of your data?

Dr. George Tomko, University of Toronto

Dr. George Tomko, Expert-in-Residence at IPSI (the Privacy, Security and Identity Institute) at the University of Toronto, Adjunct Professor of Computer Science at Ryerson University, and a neuroscientist, believes the time is ripe to address the privacy and ethical challenges we face today, and to put in place a system that works for individuals while delivering effective business performance and minimizing harm to society at large. I had the privilege of meeting George to discuss his brainchild, SmartData: intelligent agents as a solution to data protection.

As AI explodes, we are witnessing incident after incident, from technology mishaps to data breaches, data misuse, and erroneous and even deadly outcomes. My recent post, Artificial Intelligence Needs to Reset, argues for taking a step back, slowing the course of AI, and examining these events with a view to educating, fixing, preventing, and regulating toward effective and sustainable implementations.

Dr. Tomko is not new to the topic of privacy: he invented Biometric Encryption as well as the Anonymous Database in the early 1990s. His SmartData concept was first published in SmartData: Privacy Meets Evolutionary Robotics, co-authored with Dr. Ann Cavoukian, former three-term Privacy Commissioner of Ontario and creator of Privacy by Design. This led to his current work, SmartData Intelligent Agents, the subject of this article.

There is an inherent danger in today's model. The internet did not evolve along its intended path. Tim Berners-Lee envisioned an open internet, owned by no one,

…an open platform that allows anyone to share information, access opportunities and collaborate across geographical boundaries…

That vision has been challenged: the spread of misinformation and propaganda online has exploded, partly because of the way the advertising systems of large digital platforms such as Google or Facebook have been designed to hold people's attention…

People are being distorted by very finely trained AIs that figure out how to distract them.

What has evolved is a system that’s failing. Tomko points to major corporations and digital gatekeepers who are accumulating the bulk of the world’s personal data:

He who has the personal data has the power, and as you accumulate more personal information (personally identifiable information, location, purchases, web surfing, social media), in effect you make it more difficult for competitors to get into the game. The current oligopoly of Facebook, Google, Amazon etc will make it more difficult for companies like Duck Duck Go and Akasha to thrive.

That would be okay if these companies used the data in accordance with the positive consent of the data subject, for the primary purpose intended, and protected it against hacking. However, we know that's not happening. Instead, they are using it for purposes not intended, selling the data to third parties, and transferring it to government for surveillance, often without a warrant based on probable cause.

Tomko asserts that if Elon Musk and the late Stephen Hawking are correct about the potential for a dystopian AI of the kind popularized by Skynet in The Terminator series, such an outcome becomes far more likely when the AI has access to large amounts of personal data centralized in databases. While this implies an AI with malevolent intentions, humans are relentlessly innovative, and Tomko argues for putting roadblocks in place before this happens.

Enter SmartData. This is the evolution of Privacy by Design, which shifts control from the organization and places it directly in the hands of the individual (the data subject).

SmartData empowers personal data by, in effect, wrapping it in a cloak of intelligence such that it becomes the individual's virtual proxy in cyberspace. No longer will personal data be shared or stored in the cloud as merely data, encrypted or otherwise; it will now be stored and shared as a constituent of the binary string specifying the neural weights of the entire SmartData agent. This agent proactively builds in privacy, security and user preferences, right from the outset, not as an afterthought.

For SmartData to succeed, it requires a radical, new approach – with an effective separation from the centralized models which exist today.

Privacy Requires Decentralization and Distribution

Our current systems and policies present hurdles we need to overcome as privacy becomes the norm. The advent of Europe's GDPR is already making waves and challenging business today. GDPR's Article 20 (the right to data portability) and Article 17 (the right to be forgotten) require mechanisms to download personal data and to delete it absolutely, requirements that run counter to current directives and processes. Most systems are built for data redundancy, so copies of data tend to persist; systems will need to evolve to fully comply with these GDPR mandates. In addition, customer transactions on private sites are collected, analyzed, shared and sometimes sold, with a prevailing mindset that data is owned at the organizational level.

Tomko explains the SmartData solution must be developed in an open source environment.

A company that says: “Trust me that the smart agent or app we developed has no “back-door” to leak or surreptitiously share your information,” just won’t cut it any longer. Open source enables hackers to verify this information. I believe that such a platform technology will result in an ecosystem that will grow, as long as there is a demand for privacy.

Within this environment, a data utility within the SmartData platform can request all personal data, under GDPR-like regulations, from the organizational database. As per the SmartData Security Structure, each subject's personal data is then cleaned and collated into content categories, e.g., A = MRI data, B = subscriber data. The data are de-identified, segmented, encrypted and placed in these locked boxes (files in the cloud) identified by category metatags. A "Trusted Enclave," such as Intel's SGX, is associated with each data subject's personal data. The enclave generates a public/private key pair and outputs the public key, which is used to encrypt the personal data by category.
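
To make the locked-box idea concrete, here is a minimal Python sketch of per-category hybrid encryption, assuming the `cryptography` library. The category names and the `seal_category` helper are hypothetical, and in the actual design the key pair would be generated and held inside a trusted enclave (e.g., Intel SGX) rather than in application code; this is only an illustration of encrypting data by category under an enclave's public key.

```python
# Illustrative only: per-category "locked boxes" using hybrid encryption.
# A plain RSA key pair stands in for the enclave's key pair (hypothetical).
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.fernet import Fernet

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # the enclave would output only this

def seal_category(category_tag: str, plaintext: bytes) -> dict:
    """Encrypt one category of personal data (e.g., A = MRI data)."""
    data_key = Fernet.generate_key()                  # symmetric key for the bulk data
    ciphertext = Fernet(data_key).encrypt(plaintext)  # the "locked box" contents
    wrapped_key = public_key.encrypt(                 # only the enclave can unwrap this
        data_key,
        padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                     algorithm=hashes.SHA256(), label=None),
    )
    # Stored in the cloud under a category metatag, not under the person's identity.
    return {"metatag": category_tag, "ciphertext": ciphertext, "wrapped_key": wrapped_key}

box = seal_category("A", b"de-identified MRI scan bytes")
```

Decryption would happen only inside the enclave, which alone holds the private key.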

Today, information is stored and accessed by location. If breaches occur, this practice increases the risk of exposure because information about data subjects is bundled together. Categorizing and storing personal information by content effectively prevents personal identity from being connected with the data itself. Only SmartData will know its data subject and the pointers to that subject's unique personal information, accessed via a unique private key.

SmartData Security Structure, George Tomko

Ensuring Effective Performance while Maintaining Individual Privacy

Organizations that want to utilize data effectively to improve efficiency and performance will need to take a different route. How do companies analyze and target effectively without exposing personal data? Tomko declares that Federated Learning, which distributes data analytics such as machine learning (ML), is key:

Federated Learning provides an alternative to centralizing a set of data to train a machine learning algorithm, by leaving the training data at their source. For example, a machine learning algorithm can be downloaded to the myriad of smartphones, leveraging the smartphone data as training subsets. The different devices can now contribute to the knowledge and send back the trained parameters to the organization to aggregate.  We can also substitute smartphones with the secure enclaves that protect each data subject’s personal information.

Here’s how it would work: an organization wants to develop a particular application based on machine learning, which requires some category of personal data from a large number of data subjects as a training set. Once it has received consent from the data subjects, it downloads the learning algorithm to each subject’s trusted enclave. The relevant category of encrypted personal data is then retrieved, decrypted with the enclave’s secret key, and used as input to the machine learning algorithm. The trained weights from all data subjects’ enclaves are then sent to a master enclave within the network, which aggregates them. The iteration continues until accuracy is optimized, at which point the final weights are sent to the organization. Tomko affirms,


The organization will only have the aggregated weights that were optimized on the personal data of many data subjects. It would not be able to reverse-engineer the personal data of any single data subject. The organization would never have access to anyone’s personal data, plaintext or otherwise, yet would still be able to accomplish its data-analytic objectives.
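
As an illustration of the aggregation step, here is a minimal federated-averaging sketch in Python with NumPy. The function names, the logistic-regression stand-in, and the simple mean-of-weights rule are assumptions for clarity, not the specific protocol Tomko describes; the point is that only trained weights, never raw data, leave each simulated enclave.

```python
import numpy as np

def local_update(weights, X, y, lr=0.01):
    """One round of local training inside a data subject's enclave
    (a plain logistic-regression gradient step as a stand-in)."""
    preds = 1.0 / (1.0 + np.exp(-X @ weights))
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

def aggregate(weight_list):
    """Master enclave: average the trained weights from all enclaves."""
    return np.mean(weight_list, axis=0)

# One federated round over three simulated enclaves (random data for illustration).
rng = np.random.default_rng(0)
global_w = np.zeros(5)
enclaves = [(rng.normal(size=(20, 5)), rng.integers(0, 2, 20)) for _ in range(3)]
local_ws = [local_update(global_w.copy(), X, y) for X, y in enclaves]
global_w = aggregate(local_ws)   # only aggregated weights reach the organization
```

In practice this loop would repeat until accuracy converges, with the master enclave redistributing the aggregated weights each round.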

Federated Learning – Master Enclave, George Tomko

Building a Secure Personal Footprint in the Cloud

To ensure personal web transactions are secure, a person instructs their SmartData agent to, for example, book a flight. The instruction is transmitted to the cloud using a secure protocol such as IPsec. This digital specification (a binary string) is decrypted and downloaded to one of many reconfigurable computers, which interprets the instructions.

Natural language processing (NLP) would convert the verbal instructions into a formal language and would also handle the encoded communications back and forth between subject and organization to facilitate the transaction, eliciting permission for passport and payment information. What’s different is the creation of an agreement (stored on the blockchain) that records the consented terms of use between the parties. It also adds an incentive component: cryptocurrency that enables the data subject to be compensated for their information, if required. This mechanism would be used before every transaction to ensure transparency and expediency between the parties.
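
As a rough illustration of such a consent agreement, here is a hypothetical Python sketch that hashes a consent record so the digest could be anchored on a blockchain. The field names, the token-based compensation field, and the off-chain/on-chain split are illustrative assumptions, not a specification of the actual SmartData protocol.

```python
import hashlib, json, time

def make_consent_record(subject_id, organization, purpose, data_categories, compensation_tokens=0):
    """Build a consent record plus a hash suitable for anchoring on a blockchain."""
    record = {
        "subject": subject_id,                 # pseudonymous identifier, not a real name
        "organization": organization,
        "purpose": purpose,                    # e.g., "book flight YYZ -> SFO"
        "data_categories": data_categories,    # e.g., ["passport", "payment"]
        "compensation_tokens": compensation_tokens,
        "timestamp": int(time.time()),
    }
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record, digest   # record kept off-chain; only the digest is anchored on-chain

record, digest = make_consent_record(
    "subject-42", "airline.example", "book flight", ["passport", "payment"], compensation_tokens=5
)
```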

Tomko realizes Blockchain has its limitations:

Everyone wants to remove the intermediary and the crypto environment is moving quickly. However, we can’t rely on Blockchain alone for privacy because it is transparent, and we can’t use it for computation because it is not scalable.

AI as it exists today is hitting some stumbling blocks. Most work remains within ANI (Artificial Narrow Intelligence), with models and solutions built for very specific domains that cannot be transferred to adjacent domains, and Deep Learning has its limitations. The goal of SmartData is to develop a smart digital personal assistant that serves as a proxy for the data subject across varied transactions and contexts. Tomko illustrates,

With current Deep Learning techniques, different requests such as ‘Hey SmartData, buy me a copy of …” or “book me a flight to…” encompass different domains, and accordingly, require large sets of training data specific to that domain. The different domain-specific algorithms would then need to be strung together into an integrated whole, which, in effect, would become SmartData. This method would be lengthy, computationally costly and ultimately not very effective.

The promise of AI, to explain and understand the world around us, has yet to reveal itself.

Tomko explains:

To date, standard Machine Learning (ML) cannot achieve the incremental learning necessary for intelligent machines; it lacks the ability to store learned concepts or skills in long-term memory and use them to compose and learn more sophisticated concepts or behaviors. A system that emulates the human brain to explain and generally model the world cannot be solely engineered. It has to be evolved within a framework of Thermodynamics, Dynamical Systems Theory and Embodied Cognition.

Embodied Cognition is a field of research that “emphasizes the formative role that both the agents’ body and the environment play in the development of cognitive processes.” Put simply, cognitive processes develop when tightly coupled agent-environment systems emerge from real-time, goal-directed interactions between agents and their environments; in SmartData’s case, that environment is virtual. Tomko notes that the underlying foundation of intelligence (including language) is action.

Actions cannot be learned in the traditional ML way but must be evolved through embodied agents. The outcomes of these actions will determine whether the agent can satisfy the data subject’s needs.

Tomko references W. Ross Ashby, a cybernetics pioneer from the 1950s, who proposed that every agent has a set of essential variables that serve as its benchmark needs, against which all of its perceptions and actions are measured. The existential goal is always to satisfy those needs. Using this model (see below), we can train the agent to satisfy the data subject’s needs and to retain the subject’s code of ethics. Essential variables determine the threshold between low surprise and high surprise. Ideally, the agent should maintain a low-surprise, homeostatic state (within the manifold) to be satisfied; anything outside the manifold, i.e., high surprise, should be avoided. Tomko uses Ashby’s example of a mouse that wants to survive. If a cat is introduced, the mouse builds a causal model of its needs, comparing its sensory inputs against its benchmark needs to determine how to act when the cat is present so that it maintains its life-giving states.

Apply this to individual privacy. As per Tomko,

The survival range will include parameters for privacy protection. Therefore, if the needs change, or there is a modified environment or changing context, the agent will modify its behavior automatically and adapt, because its needs are the puppet-master.

This can be defined as a reward function: we reward actions that result in low surprise, or low entropy. For data privacy, we ideally want to avoid any actions that would lead to privacy violations, which equate to high surprise (and greater disorder).
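
To make the reward idea concrete, here is a toy Python sketch. The scalar "surprise" score, the manifold threshold, and the privacy-violation flag are made-up assumptions for illustration; the sketch simply rewards staying inside a low-surprise homeostatic region and heavily penalizes actions that would violate privacy.

```python
def reward(surprise: float, violates_privacy: bool,
           surprise_threshold: float = 1.0, privacy_penalty: float = 100.0) -> float:
    """Toy reward: low surprise (inside the homeostatic manifold) is good;
    privacy-violating actions are strongly penalized."""
    r = -surprise                              # lower surprise => higher reward
    if surprise > surprise_threshold:          # outside the manifold of needs
        r -= (surprise - surprise_threshold)   # extra penalty for leaving the manifold
    if violates_privacy:                       # e.g., releasing data without consent
        r -= privacy_penalty
    return r

print(reward(0.2, False))   # satisfied, privacy-preserving action
print(reward(2.5, True))    # high surprise and a privacy violation
```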


Manifold of Needs, George Tomko

Toronto’s Sidewalk Labs: The Need for Alternative Data Practices

At the time of writing, Dr. Ann Cavoukian, Expert-in-Residence at Ryerson University and former three-term Privacy Commissioner of Ontario, resigned as an advisor to Sidewalk Labs in Toronto, a significant Alphabet-backed project that aimed to develop one of the first "smart cities of privacy" in the world. Her resignation drew national media attention because of her strong advocacy for individual privacy. She explains,

My reason for resigning from Sidewalk Labs is only the tip of the iceberg of a much greater issue in our digitally oriented society.  The escalation of personally identified information being housed in central databases, controlled by a few dominant players, with the potential of being hacked and used for unintended secondary uses, is a persistent threat to our continued functioning as a free and open society.

Organizations in possession of the most personal information about users tend to be the most powerful. Google, Facebook and Amazon are but a few examples in the private sector… As a result, our privacy is being infringed upon, our freedom of expression diminished, and our collective knowledge base outsourced to a few organizations who are, in effect,  involved in surveillance fascism. In this context, these organizations may be viewed as bad actors; accordingly, we must provide individuals with a viable alternative…

The alternative to centralization of personal data storage and computation is decentralization – place all personal data in the hands of the data-subject to whom it relates, ensure that it is encrypted, and create a system where computations may be performed on the encrypted data, in a distributed manner… This is the direction that we must take, and there are now examples of small startups using the blockchain as a backbone infrastructure, taking that direction.  SmartData, Enigma, Oasis Labs, and Tim Berners-Lee’s Solid platform are all developing methods to, among other things, store personal information in a decentralized manner.

Other supporters of Dr. George Tomko concur:

Dr. Don Borrett, a practicing neurologist with a background in evolutionary robotics and a master’s degree from the Institute for the History and Philosophy of Science and Technology at the University of Toronto, states:

By putting control of personal data back into the hands of the individual, the SmartData initiative provides a framework by which respect for the individual and responsibility for the collective good can both be accommodated.

Bruce Pardy, a law professor at Queen’s University who has written on a wide range of legal topics, including human rights, climate change policy, free markets, and economic liberty, declares:

The SmartData concept is not just another appeal for companies to do better to protect personal information. Instead, it proposes to transform the privacy landscape. SmartData technology promises to give individuals the bargaining power to set their own terms for the use of their data and thereby to unleash genuine market forces that compel data-collecting companies to compete to meet customer expectations.

Dr. Tomko is correct: the time is indeed ripe. Sidewalk Labs, an important experiment that will vault us into the future, is an example of the journey many companies must take to propel us toward an inevitable future in which privacy is commonplace.

This article originally appeared on Forbes.

Technology

How are AI tools like ChatGPT deployed in retail?

“ChatGPT is no doubt amplifying customers’ shopping experiences, leading to more sales and profit for retailers,” says one tech founder.

While many people are asking the online AI tool ChatGPT all kinds of fun questions, it is also being put to work by retailers to improve the way they do business.

And the use cases vary widely.

In an article from the National Automobile Dealers Association, Fiat and Kia Germany have begun using ChatGPT to answer questions in an interactive digital showroom. The report went on to describe how it could dispense information about vehicles and financing options, and provide a more personalized experience for customers.

In another example, retailers can now create a profile of a customer’s sizing, previous purchases, and browsing history, and integrate ChatGPT into their recommendation engines to personalize product suggestions. Along similar lines, software company ElifTech created ElifMail, an email marketing solution powered by ChatGPT, which helps automate the process of responding to customer inquiries so that the retailer can focus on more critical tasks.
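
As a sketch of how such an integration might look, the snippet below calls the OpenAI chat completions REST endpoint with a customer profile and asks for product suggestions. The profile fields, prompt wording, and model choice are illustrative assumptions, not any retailer's actual implementation.

```python
import os, requests

def suggest_products(profile: dict, catalog_snippet: str) -> str:
    """Ask a chat model for personalized product suggestions based on a customer profile."""
    prompt = (
        f"Customer profile: sizes {profile['sizes']}, past purchases {profile['purchases']}, "
        f"recently browsed {profile['browsing']}.\n"
        f"From this catalog excerpt, suggest three items with one-line reasons:\n{catalog_snippet}"
    )
    resp = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [
                {"role": "system", "content": "You are a retail recommendation assistant."},
                {"role": "user", "content": prompt},
            ],
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

profile = {"sizes": "M / 32W", "purchases": ["trail runners"], "browsing": ["rain jackets"]}
# print(suggest_products(profile, "1. Packable rain jacket  2. Wool hiking socks  3. Trail pants"))
```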

Inside Intelligence reported that French supermarket chain Carrefour is “experimenting with ChatGPT and generative AI to create videos answering common customer questions, such as how to eat healthier for less.”

ChatGPT is also helpful for retailers when it comes to visual product searches, noted Ryan Faber, founder of Copymatic, a cutting-edge AI-powered platform that helps businesses create content.  

“It comes with visual recognition technology that retailers are utilizing to attract and give more options to customers,” he explains. 

“Before, only text and voice searches were available, but with visual product searches, customers are getting more satisfied as they can easily upload an image and get exact results. ChatGPT is no doubt amplifying customers’ shopping experiences, leading to more sales and profit for retailers.”

Instacart is already using the service as a chatbot, reports The Wall Street Journal. Chatbots are inherently able to handle a large volume of customer inquiries and provide personalized recommendations, product information, and support.

One benefit for the consumer is that the purchase could be completed entirely within the chat window, noted Abby McNally, Director of Planning and Awareness Media at Collective Measures, a marketing agency based in Minneapolis.

“There’s no need to navigate to a different page or to enter any credit card information. This can lead to higher conversion rates and more sales.” Backing up this point, a LivePerson survey showed that 68 percent of consumers become more loyal to a brand if they can resolve issues through a chatbot, and 60 percent of those aged 18 to 24 actually prefer a chatbot interaction to a human’s. 

McNally added that AI chatbots in retail will come in handy as a tool for employee training. A generative AI chatbot could draft fictional customer service scenarios for associates to respond to.

Nevertheless, as anyone who has used ChatGPT knows, the responses can sometimes be flawed, inaccurate, or incomplete. Chani Jos, a freelance web programmer in Montreal who has worked for Google and Waze, said she has concerns that it has “the potential to be misused for malicious purposes, such as spreading misinformation or conducting scams.” She also warned that the programming is not yet at a point where it can replicate real-life customer-service needs.

Since ChatGPT and similar generative AI tools have been built on broadly accessible AI models, they are largely available to anyone. Because of this, we’re sure to see more startups using GPT-3 to build various tools.

“ChatGPT and AI tools can be a great way for small businesses to save time and effort on their marketing, product-description analysis and customer service, while achieving better results,” said marketing consultant and cofounder of business consulting firm 172Group, Shari Wright Pilo. She has been researching AI and ChatGPT to help her clients.

One example? Tasked with analyzing a client’s executive summary, Wright Pilo asked ChatGPT to develop a marketing plan that included social and email campaigns for special offers.

After tweaking prompts, she and the client were able to create a content calendar for three months, complete with post copy, calls to action, and visual suggestions. This took 20 minutes, not including revision time.

The client transferred the plan to a Google sheet, where she personalized the posts to match her tone and branding, creating visuals in Canva. 

The result: the client had a complete social media and email marketing strategy for the next three months, created within two days, freeing up her time to focus on other business tasks and generate more revenue.

“Even better, she’s now engaging with her audience on social media, growing her email list, and keeping her customers happy,” noted Wright Pilo.

Business

Only 13% of Web3 founding teams include any women, BCG study finds

A look into a BCG report highlighting gender disparity in Web3 and STEM.

It’s shocking that in 2023 we still see vast gender disparity across entire industries. Unfortunately, STEM fields and sub-industries like Web3 see it the most.

If you haven’t heard, Web3 is the term for a blockchain-based vision of the internet built around cryptocurrencies and decentralized technologies.

The Boston Consulting Group (BCG) found that only 13% of Web3 companies included any women on their founding teams. Another key finding was that only 3% of Web3 company founding teams consisted of all women. 

Talk about archaic, for such a progressive industry. 

We dove into the report to understand the severity of that disparity and what companies can do about it. Let’s start with some of the report’s key findings on founders:

  • 13% of Web3 company founding teams have at least one woman
  • 3% of Web3 company founding teams consist entirely of women
  • 93% of Web3 founders are men

These findings remain consistent not only in North America, but also in Asia-Pacific and Europe. Unfortunately, this disparity continues even when you look at the wider workforce of Web3 companies:

  • 73% of Web3 companies’ entire workforce are men
  • 88% of technical roles at Web3 companies are held by men

BCG also examined the role of women on Web3 founding teams by startup stage and funding amount. Sadly, the bigger the investment, the less likely a woman was to sit on the founding team. Only 7% of Web3 companies with $1B invested had women on their founding teams, and companies that received between $500M and $999M had all-male founding teams.

STEM companies show similar results. While the US Census shows more women moving into STEM roles, the disparity is still present, and BCG’s report backs this up as well:

  • 33% of STEM company workforces are women
  • 25% of technical roles at STEM companies are held by women

What does BCG propose we do about it? Luckily, the early nature of Web3 offers time to rectify the gender disparity. Here are some strategies discussed:

  • Monitor the data: Granular, objective data collection will keep track of female representation within a company’s workforce and founders. 
  • Include women on VC investment teams: All-male investment teams are more likely to back all-male founding teams.
  • Create inclusive brand experiences: The Web3 experience should cater to a broad audience. 
  • Stay close to regulators: Collaborate with government and organizational entities to shape regulations for this new industry.
  • Build mentorship and support opportunities: Diverse networks and mentorship opportunities can keep companies in check with gender equality. 

Read BCG’s full press release.

Business

Are realtors too valuable to be disrupted by technology?

Tens of billions of venture capital dollars go into proptech every year. But realtors remain critical middlemen for most consumers. Is this just the way it will always be? Here’s a look at how tech is changing residential real estate – and how it’s not.

The tech industry repeatedly sees itself as a disruptor, particularly of industries with inefficient models and unnecessary costs baked in.

Why shouldn’t real estate be a prime target for tech?

As Forbes notes:

“Real estate is the only mammoth-size market remaining in which middlemen (brokers/agents) have complete control of the process. The operative members of the transaction (buyers/sellers) are withheld from direct communication and limited in resources and transparency. They are at the mercy of the middlemen in a world where other industries are constantly being refreshed, redesigned, and automated.”

Still, Canadian (and American) realtors are, to date, disruption-resistant. Canadian realtors extract billions in value every year for their work. This is just how real estate works in this country, but it is kind of odd, especially because Canada’s housing crisis is exactly that: a crisis.

Canada needs to build 3.5 million extra homes by 2030 to ensure affordable housing for everyone living in the country. That’s on top of the expected build out of 2.3 million homes that are currently planned.

That’s a shocking number when you consider the United States, with ten times the population, is short a relatively modest 6.5 million homes.

This housing gap means some version of the following story is happening in Canada basically every single week:

A seller wants to put their home on the market. They sign with a realtor who shares data on how to price the property, photographs it, lists it on MLS and advertises it. Depending on the seller, the realtor may provide significant guidance on the process of selling a home. People tend to get nervous when they’re selling their single biggest asset.

Still, the whole process can be over in a matter of weeks — a win for sellers, presumably. Well, sort of.

This process can be efficient in a hot market, but it also leaves many sellers with an odd taste in their mouths as they watch their realtor and their buyer’s realtor walk away with commissions of thousands, if not tens of thousands, of their dollars.

So, why hasn’t tech made more headway in bleeding out these seemingly unnecessary costs for buyers and sellers?

It’s not for a lack of new models, innovation, and capital spending. Investors allocated more than $32 billion USD into proptech companies in 2021. (‘Proptech’ just means technology solutions that enable the buying and selling of residential and commercial real estate). By 2028, the global proptech market is expected to reach $64.3 billion USD.

The investment is there. But so are the realtors. So, what changes are happening?

Proptech platforms are creating more informed buyers and sellers

Consumers are seeing the results of the money that has poured into proptech over the last decade. During the home-buying frenzy that followed a certain pandemic, many buyers toured properties virtually, and made buying decisions without ever being inside the place they’d soon call home.

But that’s just the latest evolution of real estate technology for consumers. Much of the first wave of proptech has already become second nature for many of us. We all have access to powerful, data-driven tools and platforms to aid us when it’s time to buy or sell.

Just a few examples:

  • Zillow is a one-stop digital marketplace that serves home buyers and sellers, as well as renters and landlords. It goes well beyond MLS, with deep resources and functionality like property valuation estimates. It’s the largest real estate website in the U.S. with over 60 million monthly views – and it’s increasingly popular in Canada.
  • Redfin is a real estate brokerage that offers lower than standard brokerage fees for its agents to sell residential homes. The company operates in both Canada and the U.S.
  • Trulia is similar to Zillow but offers additional functionality like crime maps by area, neighborhood profiles, and estimated monthly property upkeep costs. Trulia was acquired by Zillow in 2014 but continues to run as a separate platform.
  • Bōde is a Canadian platform that enables sellers to list their properties for free. Then they market the listing on platforms like Zillow and MLS. When a buyer and seller connect, Bōde facilitates the sale of the home and charges a 1% fee (up to a maximum of $10,000) on the final sale price. No realtor is involved.

While consumers love platforms like these and are doing more research on their own, they still gravitate to realtors when it comes time to sell or buy. A recent CBC article noted that:

“While specific numbers are hard to come by, all indications suggest that private sales make up a tiny sliver of overall real estate deals in Canada. For example, For Sale By Owner recently had some 116 listings in all of Ontario, while some mid-sized cities in the province showed more than 1,000 on MLS.”

Change is coming for everyone – from buyers to sellers to realtors

Still, the forecasts suggest this initial wave of proptech innovation may lead to more significant changes in the years to come. 

A much-quoted Oxford University study from 2013 projected that automation would replace 50% of all current jobs within the next two decades; the same study put the likelihood of automation replacing traditional “real estate sales agents” at 86%, and “real estate brokers” at 97%. By late 2020, technology had replaced over 60 million jobs in the U.S. alone, and the World Economic Forum predicted tens of millions more to come, with fully 50% of jobs done by machines by 2025.

It’s clear that the rate of automation isn’t exactly slowing down.

Blockchain, the distributed ledger that promises to cut out unnecessary middlemen across industries, offers the potential to reduce the need for realtors by protecting against fraudulent activity through decentralized smart contracts.

But widespread adoption of blockchain technology hasn’t happened in any major industry, much less a massive asset class like real estate. And blockchain alone doesn’t eliminate the need for home buyers and sellers to get expert counsel from someone during a transaction.

And AI has promise and potential, sure. It can already do things with data that no human can. But buyers and sellers seem to consistently value empathy, human interaction, negotiation skills, and a realtor’s personalized knowledge of a community or property type. This is especially true when someone is making the life-altering choice to buy or sell a house. If it were your house, would you want the robot or the person?

So far, most Canadians are choosing the person. (The same is even true with another major life purchase, as we’ve recently reported.)

But there are more changes afoot.

Think back to that theoretical seller that sees their house sold in days and in return sacrifices tens of thousands of dollars in commissions. Is that a good deal for them? Maybe not.

That insight is at the root of Bid My Listing, a new startup from entrepreneur Matt Proman and real estate bigwig Josh Altman.

Bid My Listing enables sellers to solicit bids from realtors to list their house. As Proman told Entrepreneur.com:

“I had a lot of agents knocking on my door, leaving their business cards that they wanted to represent me in the transaction.”

Proman thought his Long Island home would move quickly and signed a six-month exclusive listing agreement with an agent. “I waited and waited and waited,” he said. “And I watched two other houses sell on my block.”

“I said, ‘I will never, for any of my other houses, give my listing away for free. The next time, the agents have to put their money where their mouth is and have skin in the game.’”

So, while realtors may exist long into Canada’s real estate future, tech may eventually create major changes in their roles and how they’re compensated. They’re likely to find themselves having to adapt to a changing landscape where buyers and sellers want more value for the commissions they pay on a real estate transaction.

If they’re willing to pay them at all.
