The Inaugural Responsible AI Ecosystem Summit Paves a Pathway towards Inclusive Economic Prosperity

This post was written by Hessie Jones and Ryan Panela

On October 24, 2024, Women in AI Ethics launched its inaugural Responsible AI Ecosystem Summit in New York City, hosted in collaboration with the Canadian Consulate General in New York and Altitude Accelerator. A delegation of investors, researchers, founders, practitioners, and innovators, including strong participation from Canada, convened to build a responsible and ethical AI ecosystem that advocates for human-centered design. 

Mia Shah-Dand, CEO of Lighthouse3 and Founder of Women in AI Ethics, emphasized,  

“This summit reflects our core mission and key role as a catalyst in the global movement towards sustainable and responsible AI. While AI presents many benefits, there is an urgent need for new AI solutions that minimize risks and ensure benefits from AI are distributed equitably. We believe that a thriving responsible AI ecosystem is the pathway to new opportunities, economic growth, prosperity, and vibrant technological futures that include all of us.” 

This inaugural event covered the following critical topics: 

  • Canadian support for diverse founders through the Canadian Technology Accelerator 
  • AI Safety Alignment 
  • AI and Privacy 
  • Funding Diverse Founders 

Our esteemed speakers included: Patricia Thaine, Co-Founder & CEO of Private AI; Jurgita Miseviciute, Head of Public Policy and Government Affairs, Proton; Saima Fancy, Senior Privacy Specialist, Ontario Health; Aakanksha, Member of Technical Staff on the Safety team, Cohere; Giselle Melo, Managing Partner of MATR Ventures; Gayatri Sarkar, Owner and CEO of Advaita Capital; and Fadwa Mohanna, Founder and CEO of One37. 

Key Takeaways:

Integration of Trust and AI

Safia Morsly-Fikai, Trade Commissioner at the Consulate General of Canada in New York, interviewed Fadwa Mohanna of One37, which enables businesses and users to connect, exchange, and verify data. Mohanna is an alumna of the Canadian Technology Accelerator (CTA), which helps diverse Canadian entrepreneurs with high-potential businesses expand into the U.S. market. 

Mohanna has become a pioneer in secure identity-based authentication, reducing fraud and ensuring seamless, safe and secure AI-powered user interactions.  

One37 allows for the verification of incoming and outgoing information between businesses and consumers through verifiable credentials, and uses QR code authentication so that customer data remains in customers’ digital wallets.  

One37’s novel integration with IBM’s Watson AI will, according to Mohanna, transform how businesses operate. New features, including chatbots for bill payments, booking services, or dispute resolution across financial institutions, can be effective while maintaining privacy and preventing the transmission of PII (personally identifiable information) between the end consumer and the business. 

AI Safety and Alignment

AI safety focuses on ensuring AI systems operate safely and ethically, without causing harm or unintended consequences. As large language models become more prevalent, choosing the right training data for safety alignment is crucial. However, this raises the question: alignment to what? Whose values? Despite global AI use, safety measures often prioritize Western-centric concerns and homogeneous, one-language contexts.  

Data diversity and multilingual datasets play a vital role in minimizing AI-related harms. The challenge lies in optimizing large language models for various languages and cultural preferences while addressing both global and local issues. Saima Fancy discussed these concerns, and the challenge of safeguarding AI systems designed to serve global populations, with Aakanksha from Cohere. 

Saima Fancy is the Senior Privacy Specialist at Ontario Health and has spent her career at the intersection of privacy, security, and technology, with a specific focus on the emergence of generative and agentic AI. Fancy’s cross-functional experience in engineering, data privacy, and security spans two decades, and she is an internationally recognized advocate for responsible AI practices. Her work is focused on protecting consumer privacy rights and promoting commercial collaborations to drive the development of privacy-preserving AI solutions. 

Aakanksha is currently a Member of Technical Staff in the Safety team at Cohere. She was also a research scholar with Cohere for AI, where she worked on multilingual safety. She holds a Master’s degree in computer science from New York University and has had research experience in robotics, reinforcement learning, and demand forecasting. 

Aakanksha confirmed that most of today’s models are trained primarily on English, German, and French, and it is increasingly clear that there is linguistic inequity and a lack of diversity in most training data. Cohere is dedicated to developing models trained on multilingual data and enlists participants from many countries, across cultures and languages, to ensure representation.   

How does Cohere ensure that the context of the original language is maintained and not inadvertently misrepresented in translation? If sentences have the same meaning across languages, embedding models will show high semantic similarity between the pieces of text even though they are in different languages. Cohere’s models do not perform an automatic translation step prior to encoding, which is meant to preserve the nuances specific to each language. This can be useful when translating or summarizing medical notes and, as Aakanksha noted, when communicating medication outcomes and treatment plans in a manner that is culturally appropriate to the patient. 
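The semantic-similarity property described here can be sketched with a toy cosine-similarity check. The vectors below are made-up stand-ins for what a multilingual embedding model would return for the same sentence in two languages; only the cosine arithmetic is real.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Illustrative stand-ins for embeddings; a real multilingual encoder would
# produce high-dimensional vectors, but the comparison works the same way.
emb_en = np.array([0.81, 0.12, 0.55, 0.09])        # "Take one tablet daily."
emb_fr = np.array([0.79, 0.15, 0.53, 0.11])        # "Prenez un comprimé par jour."
emb_unrelated = np.array([0.05, 0.92, 0.10, 0.70]) # an unrelated sentence

print(cosine_similarity(emb_en, emb_fr))         # high: the meanings match
print(cosine_similarity(emb_en, emb_unrelated))  # much lower
```

Because the comparison happens in the shared embedding space rather than via an intermediate translation, same-meaning texts land close together regardless of language.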

Is there a risk to the data that is collected for training? Cohere does not store customer data; rather, the raw model is provided to the enterprise for its own use and fine-tuning. In addition, synthetic data is used to train the multilingual models, as it is necessary for data augmentation and may, in the process, remove some of the biases present in human data. 
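To make the data-augmentation idea concrete, here is a minimal, hypothetical sketch of template-based synthetic text generation. Real pipelines (including LLM-generated synthetic data) are far richer; the templates and slot values below are purely illustrative.

```python
import random

# Hypothetical templates and slot values for generating synthetic training text.
TEMPLATES = [
    "How do I {action} my {object}?",
    "Please {action} the {object} before Friday.",
]
SLOTS = {
    "action": ["renew", "cancel", "update"],
    "object": ["subscription", "appointment", "password"],
}

def generate(n: int, seed: int = 0) -> list[str]:
    """Produce n synthetic sentences; the seed makes the output reproducible."""
    rng = random.Random(seed)
    sentences = []
    for _ in range(n):
        template = rng.choice(TEMPLATES)
        filled = template.format(**{k: rng.choice(v) for k, v in SLOTS.items()})
        sentences.append(filled)
    return sentences

for line in generate(3):
    print(line)
```

Because no real user ever wrote these sentences, they can augment scarce-language data without importing the biases or PII of human-authored text.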

AI Privacy

Privacy is a fundamental right. It is essential to an individual’s dignity and enjoyment of personal freedoms. The right to privacy is enshrined in Canada’s federal laws and in the constitutions of the majority of countries around the world. The AI boom and the proliferation of large language models (LLMs) pose new challenges for privacy. As personal information becomes part of models’ training data, there is a serious risk of private information being leaked through model output or exposed through third-party hacks. Individual control over personal information now seems elusive. We heard from Patricia Thaine of Private AI and Jurgita Miseviciute of Proton, two leading privacy-centered technology companies, who shared how their respective organizations are protecting user privacy and ensuring the security of their personal information. 

Proton 

Jurgita Miseviciute is the Head of Public Policy and Government Affairs at Proton. She is responsible for Proton’s global public policy and antitrust efforts and leads Proton’s engagement with politicians, governments, regulatory agencies, and other relevant stakeholders worldwide. Miseviciute spoke about the dawn of Proton and the vision of its CEO, Andy Yen. 

Proton is a Swiss-based technology company built on the philosophy that data privacy is central to bettering the internet and the technology landscape. Proton was born in 2014 as an initiative launched by Andy Yen and a group of scientists who met at CERN, the European Organization for Nuclear Research.  

After Edward Snowden’s revelations about the NSA surveillance program, the founders launched a crowdfunding campaign around one simple vision: “to remake the internet in a way that is private by default, and serves the interest of all of society, and not just the interest of a few Silicon Valley tech giants.”  

“Proton has since grown to a global privacy powerhouse. What started as Proton Mail, now the world’s largest encrypted email service, has blossomed into an ecosystem of privacy-centric products including Proton VPN, Proton Calendar, Proton Drive, and Proton Pass. These services have become the vanguard of a movement that puts user privacy first, protecting over 100 million accounts worldwide and employing over 500 people.” 

Proton Mail was the first product launched by the company. It is end-to-end encrypted, and the company does not hold the keys needed to decrypt user data, which means it is unable to access user data even if it wanted to. Miseviciute stresses, “This allows the user to take back their privacy. Everything that is written in email stops with the user.” This structure removes the company’s position as a middleman between the user and potential government access, effectively preventing the latter from requesting access to user data. 

Proton believes privacy is integral to AI models, and rather than AI models adding privacy components after the fact, they should be integrated into the technology from the beginning. For all Proton products, AI data is secure and will not be used for training without consent. 

Proton also believes AI models should be open source in order to distribute their capabilities broadly, rather than being controlled by a handful of organizations. 

Private AI 

Patricia Thaine is Co-Founder & CEO of Private AI, a Microsoft-backed startup that raised its Series A in a round led by BDC. Private AI was named a 2023 Technology Pioneer by the World Economic Forum and a Gartner Cool Vendor. Thaine was on Maclean’s magazine’s Power List 2024 as one of the top 100 Canadians shaping the country. 

Thaine pointed to the exposure of PII and its use in AI models as the main concern in modern technology. Private AI builds AI models aimed at identifying and suppressing PII through redaction and pseudonymization. While the idea is not new, Private AI uses a structure that increases overall performance and accuracy. Thaine indicated that regular expressions, which identify common patterns in PII, and out-of-the-box models are not effective. As she states, “Humans do not always write or speak in a manner that can be accurately interpreted by their models; however, the integration of AI into these systems can help to improve overall performance.” 

Building the model begins with identifying what constitutes PII and what risk each type of information carries. Private AI’s model not only allows the user to select the types of PII to detect and remediate; it also helps them keep up to date with privacy legislation. 

When it comes to model training with sensitive information, Thaine reveals that data can be decoded from the embedding space, revealing much of the sensitive information used for training. She adds that PII in training data should be cleaned and/or anonymized prior to model training. 

Private AI can now be integrated with other LLMs such as ChatGPT. Thaine explains that PII is redacted prior to ChatGPT input; output is received from ChatGPT, and only then is PII reinserted into the output text. The LLM never gains access to the sensitive information. 
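This redact-then-reinsert round trip can be sketched as follows. The regex-based detector here is a hypothetical toy (Thaine's point is precisely that real systems need ML-based detection rather than regexes), but it illustrates the flow in which the LLM only ever sees placeholders.

```python
import re

def pseudonymize(text: str, patterns: dict[str, str]) -> tuple[str, dict[str, str]]:
    """Replace each PII match with a placeholder and remember the mapping.

    `patterns` maps a PII label (e.g. "EMAIL") to a regex. This is a toy
    stand-in: Private AI's product uses ML models, not regexes.
    """
    mapping: dict[str, str] = {}
    counter = 0

    def substitute(match: re.Match, label: str) -> str:
        nonlocal counter
        counter += 1
        placeholder = f"[{label}_{counter}]"
        mapping[placeholder] = match.group(0)
        return placeholder

    for label, pattern in patterns.items():
        text = re.sub(pattern, lambda m, l=label: substitute(m, l), text)
    return text, mapping

def reinsert(text: str, mapping: dict[str, str]) -> str:
    """Restore the original PII in the model's output, locally."""
    for placeholder, original in mapping.items():
        text = text.replace(placeholder, original)
    return text

prompt = "Email jane.doe@example.com about invoice 4417."
redacted, mapping = pseudonymize(prompt, {"EMAIL": r"\b[\w.]+@\w+\.\w+"})
# `redacted` is what would be sent to the LLM; the real address never leaves.
llm_output = "Draft sent to [EMAIL_1] regarding the invoice."  # simulated LLM reply
print(reinsert(llm_output, mapping))  # the address reappears only on the user's side
```

The key design property is that the placeholder-to-PII mapping stays with the caller, so the third-party model sees only `[EMAIL_1]`-style tokens.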

Funding Diverse Founders

Artificial Intelligence (AI) has been a significant area of research and investment for many decades. In 2023, venture capital (VC) investment in Generative AI reached $21.3 billion globally compared to just $1 billion in 2018. 

Despite the surge in funding, VCs face a new landscape of regulatory uncertainty and growing ethical risks related to the development of LLMs. Founders are faced with risk of market saturation and a challenging path to monetization especially in developing a moat around LLMs which are controlled by a few large organizations. 

In addition, women and minorities are drastically underrepresented in venture capital while women-led tech startups only get a fraction of total tech VC funding, which threatens to further exacerbate the inequity in this space. 

I led this discussion with Giselle Melo, Managing Partner, MATR Ventures, and Gayatri Sarkar, Owner of Advaita Capital.   

Gayatri Sarkar is the owner of Advaita Capital, one of the few growth VC funds in the US owned by a woman of color. The firm invests in generative AI, deep tech, and decarbonization to advance the human race. Cheque sizes are $10–50M+, and the firm has invested in Stripe, Epic Games, Neuralink, Cohere, and other top generative AI companies. Sarkar was awarded Global Leader Under 40 for championing $100B+ in combined capital in gender advocacy through her She-VC podcast. 

Before starting MATR Ventures, Giselle was a Partner and Head of Investment Banking at a Canadian boutique advisory firm, overseeing more than $5 billion in buy/sell transactions for the wealth management industry, including banks, private wealth, institutional firms, and family offices. Giselle is also a former exited founder with a 13-year track record leading software systems, machine learning design, and engineering across multiple sectors. As the Managing Partner of MATR Ventures, Giselle is known for her strategic acumen and dedication to supporting high-growth, deep-tech investments. She is also an Entrepreneur-in-Residence with Altitude Accelerator. 

The discussion opened with the current landscape, in which women and minority founders are disproportionately underfunded in the startup technology space. While this was highlighted at least five years ago, not much has changed. Overall, while there has been increased awareness and some initiatives to support underrepresented founders, the statistics show that significant disparities persist and, in some cases, have worsened over the past five years. 

When it comes to pitch decks, slide-by-slide attention from investors varies greatly among teams of different demographics. In 2023, VCs spent 66% more time on all-female teams’ team slides than on all-male teams’. VCs also spent the most time on minority teams’ team slides, 20% more than on all-white teams’. What was not surprising: all-male teams’ fundraising-ask sections received 25% more time than all-female teams’. For many founders, this is a function of an investment sector that has been dominated by men for many years.  

As female, person-of-colour VC fund owners, Melo and Sarkar did not find the statistics surprising; however, both contend that most deep tech companies do not have a DEI (Diversity, Equity, and Inclusion) agenda. Melo noted that MATR is a performance-based fund investing in late-seed to Series A deep tech software companies led by inclusive teams.  

The gap in dollars raised between all-female and all-male teams widened for the second consecutive year. All-female teams with minority members saw the most significant increase in fundraising times while securing the least amount of capital among all demographics. On average, all-female teams raised 43% less than their all-male counterparts, while diverse teams raised 26% less than all-white teams.  

Sarkar, whose fund stands alone as a female-led, person-of-colour-led fund at the Series B stage, has seen zero deals from female founders. She recognizes there is still more work to be done. She reflects, 

“Many women find themselves having to IPO their companies after Series A and B rounds because they struggle to raise funds for Series C and D. The lack of women writing larger checks is a significant issue. Venture capital, once a nascent asset class, has evolved but it remains challenging to secure a spot on the cap tables of certain firms.” 

And while having a champion who vouches for your company can make all the difference, Sarkar stresses that the hurdles they face as a growth fund when it comes to board approvals are symptoms of a broader systemic problem. One issue is the scarcity of women leading growth funds; when she started Advaita, she was advised to raise an early-stage fund instead. 

For Melo, when asked about the challenge of closed networks and who decides who gets to be on the cap table, she acknowledged at Series A she has the flexibility to decide. With the pervasiveness of applied AI and deep tech, her robust network of deep tech subject matter experts and commercialization leaders who may also be investors provides an additional layer of skill sets to interrogate models, scrutinize the technology, and ask the hard questions. This provides a unique service to help derisk investments during the due diligence process and support the growth of her portfolio companies. It also creates opportunities for investors to access untapped investment opportunities. 

While Sarkar and Melo represent funds with the intention to bring more diversity into the tech and investment ecosystem, clearly this is an uphill battle that will take time. 

Summit Reflections

The learning, the conversations, the successes, the awareness, and the turnout for this important event painted a clear picture of the enormous change that will emerge in the coming years. Responsible AI and ethics are now mainstream, but it will require education and investment in resources to make startups truly AI-ready. Altitude Accelerator is committed to doing this and to enabling success as AI evolves. It will also take a village to make this happen. We have that village.

About: Ryan Panela is a PhD Student in the Department of Psychology, University of Toronto and Rotman Research Institute, Baycrest Academy for Research and Education; and MSc Student in the Department of Computing, Goldsmiths, University of London, UK. 
