Perspective
Our Path to Responsible AI at With
Derek Vaz
—
Nov 27, 2025
We have been working to develop our practices, processes, and services in the AI era alongside The Opening Door. Today we’re sharing the progress we have made as well as our living guideline document to support those interested in using AI responsibly.
Over the last couple of years, our studio watched the maturity of AI tools approach a pivotal point. The adoption of these systems has seemingly permeated every aspect of the tools we use in our field.
Recently, we have seen the consequences of AI proliferation take hold: from how generative models may be impeding cognitive development and worsening mental health, to the impacts the infrastructure powering these systems has on our environment and communities, to the harms AI companies exacerbate through state surveillance and military crises.
As a design studio whose values are rooted in social and climate justice, we recognized that the adoption of systems marketed as “AI” needed to be better understood, in both their potential and their impact, so we could assess why, how, and where we may use them.
Where we started
At the start of the year, we hosted internal workshops to document and discuss our uses, concerns, and ideas for how we use AI in our daily lives. If we were to imagine a future in which we use AI for the benefit of both our studio and our partners, we needed to begin in the way we start any work with our partners: Establishing foundational knowledge, sharing our concerns, and imagining potential futures.
Early adopters within our team shared ways they had used AI for everything from AI-assisted search, to transcribing and synthesizing audio recordings, to augmenting image editing.
A few team members raised prescient concerns about employing contemporary AI, including: the privacy and transparency of data shared with AI models, the ecological and community impacts of AI usage, job loss due to increased automation, the labour exploitation used to train and grow these systems, and the impact of AI on learning, creativity, and critical thinking.
Summed up, we had more questions than answers when it came to the potential application of AI. To figure out how we move forward, we needed to deepen our understanding and develop principles and guidelines for responsible use.
Partnering with The Opening Door
As we often do, we turned to a specialist to support and facilitate this exploration - in this case, Rose Genele. Rose is a responsible AI practitioner and applied AI ethicist with years of experience in the tech industry. She sits on the boards of the Canadian Centre for Ethics and Corporate Policy and Volcano Theatre, and is a member of the International Association for Safe & Ethical AI.
As the organizer of the Toronto chapter of the global All Tech is Human ethical technology community, which I am part of, Rose shared our urgent but cautious perspective on the rapid advancement of AI. Her practice, The Opening Door, seeks to support the responsible development and use of AI systems.
Together, we partnered with the goal of deepening our understanding of ‘AI’ systems and co-developing policies and contractual terms to guide both our internal practices and the services we offer our client partners.
Our work together involved numerous activities including pre‑reads and presentations, a collaborative heatmap activity, and post‑session feedback that shaped the guidelines and contractual updates we ultimately co-created.
Through our discussions and facilitated learning, we came away with a deeper, shared understanding of AI as it is understood today and how it affects ourselves, our work, and society at large.
The Mythology of AI
First, the team came to understand what it means when we talk about “AI” - a broad and purposefully opaque term that applies to numerous underlying technologies and practices.
AI today is experienced primarily through closed large language models like OpenAI’s ChatGPT or Anthropic’s Claude. These systems are trained on large sets of data, rewarded or penalized for their responses, and, most notably, are able to simulate human-like conversation.
That familiarity masks the fact that these models are still simply prediction-based algorithms rather than truly intelligent. (Intelligence itself is not consistently defined.) As the tools themselves subtly warn, their results, while sounding very confident, can be inaccurate or simply false.
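To make the “prediction” framing concrete, here is a purely illustrative toy sketch: it predicts the next word by counting which words followed which in a tiny sample text. Production language models are vastly more sophisticated, but the underlying mechanic is the same kind of statistical next-token prediction, with no understanding involved.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny sample corpus.
corpus = "the model predicts the next word the model predicts text".split()
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed follower of `word`, or None."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("model"))  # "predicts" always followed "model" in the sample
print(predict_next("the"))    # "model" is the most common follower of "the"
```

The sketch confidently emits whichever continuation was most frequent in its data, whether or not that continuation is true, which is the same failure mode behind so-called hallucinations.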
These themes were discussed in ongoing conversations we had at our studio’s annual retreat.
“We call it Intelligence, we say hallucinations. We call these, like, systems, these predictive systems ‘neural networks’ with ‘neurons’. So they're intentionally aligning themselves to kind of like a human replacement in the language.”
— Design Director, With
“I think a lot of my assumptions around AI and how… the industry is actually operating at the moment were not accurate or incomplete.”
— Designer, With
In the end, our general consensus was that the promise of these systems - pattern recognition and generative abilities - has so far been outweighed by the costs of maintaining them at scale.
Before we moved forward, we needed to understand these impacts more deeply.
The Bad and the Ugly
Reflecting on our earlier concerns through an intersectional lens with The Opening Door raised further challenges that were important to name, including:
Environmental and Economic Tensions: The race to build AI infrastructure has been a priority of governments around the world, including here in Canada, for the economic benefits it is suggested to create. But at what cost? The fresh water consumed to cool data centres alone, even in a country rich with the resource, is a challenge no one has a clear answer for.
Creative Labour Exploitation: Individual creative capital is simultaneously threatened by systems that are trained on unknown amounts of stolen intellectual property, while being leveraged to displace the skilled work used to create that original value.
Geopolitical Implications: The data collected by AI models, just as with social media, can unknowingly be used for military applications. These systems pose serious ethical concerns about how our data may be used to further conflict, displacement, and genocide around the world.
It should be noted that many of these concerns apply to technologies we used before generative AI, like social media. However, the ‘AI Infrastructure Gold Rush’ to support the scale of these systems amplifies the impact and severity of these issues, as unprecedented amounts of private and public investment are being used to drive this next wave of technology.
As we publish this article, even markets that were bullish about the potential of AI have started to correct as we approach the trough of disillusionment.
What it means for our partners
As creative professionals, we need to understand where these systems truly empower us to create impact in the work we do for our partners - but not at the cost of harm to people, their livelihoods, or the environment.
Following our discussions, we engaged in activities and exercises to determine a set of principles, along with the uses of AI we felt would violate those principles and should be prohibited.
We also wanted to explore concerns around privacy and consent, to ensure they were appropriately considered in our partnership agreements.
When we use systems that are fundamentally dependent on the data they have access to, seeking consent, tracking usage, and being accountable for how those systems use our data are commitments we have made to our partners to ensure their views and intellectual property are protected.
Those commitments are now a standard part of our agreements with all our partners. Like our recent code of ethics, they bind both us and our partners to be responsible and transparent about the use of AI in one another’s work, so that the systems we use aren’t benefiting private entities that may repurpose them for their own capital gain.
Protecting our and our partners’ shared creative works from benefiting these closed systems is the same reason we ensured our team opted out of having our Figma files used to train Figma’s AI models.
The Guidelines
The outcome of our work together with The Opening Door is an open and living AI Guideline document that we encourage others to review to inform their own responsible AI process.
These guidelines reflect a belief our studio maintained through the entire process: for all the hype around the value of AI systems, we don’t actually need to use them. Where we do use them - and where they return creative capacity and impact to our work - we should use them responsibly.
The guidelines and principles we defined together help us continue to make careful decisions about where and why we use AI, and what we are responsible for when we do.
If you are seeking to navigate how to use AI, The Opening Door and With are ready to support the development of your own Responsible AI Guidelines. We look forward to collaboratively helping our partners ensure they are responsibly employing AI when building brands and experiences for better futures.
The guidelines serve to:
Translate values into actionable behaviours: keeping a human in the loop, being transparent about AI use, avoiding the sharing of personally identifiable and confidential information, preferring privacy‑preserving tools, respecting IP, mitigating bias, right‑sizing models for sustainability, and requiring review for high‑impact automation.
Elevate higher‑risk use cases into an AI Impact Assessment with Director approval, reinforcing that oversight scales with impact.
Normalize disclosure of AI assistance to clients and teammates, and provide non‑AI alternatives on request.
Treat energy use and model sizing as first‑class considerations, not afterthoughts.
Pair policy with training, external references, and internal notes so teams can practice the guidelines.
Emphasize that the guidelines are rooted in With’s co‑design ethos: multi‑disciplinary input, open artifacts, and iterative refinement.
Moving toward Communal Intelligence
As we’ve developed new working models and services within this guideline framework, we believe that there is something more beneficial and transformative than how we understand and use “artificial intelligence” today.
Rooted in the strength and value of our collective knowledge and wisdom, there are approaches to AI that can serve to empower and progress communities and societies, that don’t perpetuate harm or extreme capitalism.
In Brazil, projects led by IBM Research and the University of São Paulo collaborate with Indigenous communities to develop AI-powered writing and language tools aimed at promoting endangered Indigenous languages like Nheengatu. This initiative integrates symbolic and data-driven AI to document and revitalize Indigenous linguistic heritage with strong community involvement.
Open-source AI contributes to faster innovation, transparency, bias mitigation, and decentralization of AI power, enabling communities to create culturally appropriate AI solutions that reflect their values and resist homogenizing big tech influences.
To us, these communal approaches to AI build on the benefits we have seen from participatory design methods in creating shared prosperity. Subscribe to our journal to get updates as we explore and learn more.
“There is a different way forward. Artificial intelligence doesn’t have to be what it is today. We don’t need to accept the logic of unprecedented scale and consumption to achieve advancement and progress.”
— Karen Hao, Empire of AI
About The Opening Door
The Opening Door (TOD) is an agile full-service responsible artificial intelligence agency, empowering organizations and investors to shape the future strategically and responsibly. As an AI systems and transformation partner, we design, build, and embed AI inside organizations—emphasizing responsible use through literacy, governance, and development. By prioritizing responsible AI practices, we equip organizations with robust, future-ready solutions that not only drive measurable business outcomes but also strengthen brand integrity and stakeholder confidence.
Over the last couple of years our studio started to see the maturity of AI tools turn toward a pivotal point. The adoption of these systems has seemingly permeated every aspect of the tools we use in our field.
Recently, we have seen the consequences of AI proliferation take hold, from how generative models may be impeding cognitive development and worsening mental health, to the impacts the underlying infrastructure powering its usage has on our environment and communities, to the harms AI companies exacerbate through state surveillance and military crises.
As a design studio whose values are rooted in social and climate justice, we recognized that the adoption of systems marketed as “AI” needed to be better understood, both in their potential but also their impact, to assess why, how, and where we may use them.
Where we started
At the start of the year, we hosted internal workshops to document and discuss our uses, concerns, and ideas for how we use AI in our daily lives. If we were to imagine a future in which we use AI for the benefit of both our studio and our partners, we needed to begin in the way we start any work with our partners: Establishing foundational knowledge, sharing our concerns, and imagining potential futures.
Early adopters within our team shared ways they had used AI for everything from AI-assisted search, to transcribing and synthesizing audio recordings, to augmenting image editing.
A few team members were prescient about the concerns about employing contemporary AI, including: privacy and transparency of data shared with AI models, the ecological and community impacts of AI usage, job loss due to increased automation, labour exploitation to help train and grow these systems, and the impact of AI on learning, creativity and critical thinking.
Summed up, we had more questions than answers when it came to the potential application of AI. To figure out how we move forward, we needed to deepen our understanding and develop principles and guidelines for responsible use.
Partnering with The Opening Door
As we often do, we turned to a specialist to support and facilitate this exploration - in this case, Rose Genele. Rose is a responsible AI practitioner and applied AI ethicist with years of experience in the tech industry. She sits on the boards of the Canadian Centre for Ethics and Corporate Policy and Volcano Theatre, and is a member of the International Association for Safe & Ethical AI.
As the organizer of the Toronto chapter of the global All Tech is Human ethical technology community, which I am part of, Rose shared our urgent but cautious perspective on the rapid advancement of AI. Her practice, The Opening Door, seeks to support the responsible development and use of AI systems.
Together, we partnered with the goal of deepening our understanding of ‘AI’ systems and co-developing policies and contractual terms that would guide both our internal practices and services for our client partners, respectively.
Our work together involved numerous activities including pre‑reads and presentations, a collaborative heatmap activity, and post‑session feedback that shaped the guidelines and contractual updates we ultimately co-created.
Through our discussions and facilitated learning, we came away with a deeper, shared understanding of AI as it is understood today and how it affects ourselves, our work, and society at large.
The Mythology of AI
First, the team came to understand what it means when we talk about “AI” - a broad and purposefully opaque term that applies to numerous underlying technologies and practices.
AI today is experienced through closed, large language models like OpenAI’s ChatGPT or Anthropic’s Claude. Those systems are trained on large sets of data, rewarded or penalized for their responses, and most notably are able to simulate human-like conversations.
That same familiarity masks the fact that these models are still simply prediction-based algorithms versus having true intelligence. (Intelligence itself is something not consistently defined.) As they subtly warn you their results, while sounding very confident, can be inaccurate or simply false.
These themes were discussed in ongoing conversations we had at our studio’s annual retreat.
“We call it Intelligence, we say hallucinations. We call these, like, systems, these predictive systems ‘neural networks’ with ‘neurons’. So they're intentionally aligning themselves to kind of like a human replacement in the language.”
— Design Director, With
“I think a lot of my assumptions around AI and how… the industry is actually operating at the moment were not accurate or incomplete.”
— Designer, With
In the end, our general consensus was the promise of these systems - pattern recognition and generative abilities - so far have been outweighed by the costs of maintaining them at scale.
Before we moved forward we needed to understand these impacts more deeply.
The Bad and the Ugly
Reflecting our earlier concerns from an intersectional lens with The Opening Door raised further challenges that were important to name, including:
Environmental and Economic Tensions: The race to build AI infrastructure has been the priority of governments around the world, including here in Canada, for the perceived economic benefits it is suggested to create. But at what cost? The loss of fresh water required to cool data centres alone, even in a country rich with the resource, is a challenge no one has a clear answer to solve.
Creative Labour Exploitation: Individual creative capital is simultaneously threatened by systems that are trained on unknown amounts of stolen intellectual property, while being leveraged to displace the skilled work used to create that original value.
Geopolitical Implications: The data collected by AI models, just as in social media use, can unknowingly be used to further the military application. These systems pose serious ethical concerns as to how our data is used to further conflicts, displacement and in genocide in the world.
It should be noted, many of these concerns are relevant to technology we have used prior to generative AI adoption, like social media. However, the 'AI Infrastructure Gold Rush’ to support the scale of these systems amplifies the impact and severity of these issues as unprecedented amounts of private and public investment is being used to drive this next wave of technology.
As we publish this article, even the markets who were bullish about the potential of AI have started to correct as we approach the trough of disillusionment around AI.
What it means for our partners
As creative professionals, we need to understand where these systems truly do empower us to create impact in the work we do for our partners, but not at the cost of harm to people, their livelihood, or environment.
Following our discussions, we engaged in activities and exercises to determine a set of principles and prohibited uses of AI that we felt would violate those principles.
Additionally, we wanted to ensure we explore concerns around privacy and consent to ensure those were appropriately considered in our partnership agreements.
When we use systems that are fundamentally dependent on the data they have access to, seeking consent, tracking usage and being accountable to how these systems use our data are commitments we have made to our partners to ensure their views and intellectual property are protected.
Those commitments have now become part of our standard parts of our agreements with all our partners. These commitments, like our recent code of ethics, bind both us and our partners to be responsible and transparent about the use of AI in one another’s respective works to ensure the systems we use aren’t benefiting private entities who may repurpose them for their own capital gain.
The protection of ours and our partners’ shared creative works to the benefit of these closed systems is the same reason we ensured our team opted out of our Figma files being used to train Figma’s AI models.
The Guidelines
The outcome of our work together with The Opening Door is an open and living AI Guideline document that we encourage others to review to inform your own responsible AI process.
These guidelines reflect a belief our studio maintained through the entire process: For all the hype around the value of AI systems, we don’t actually need to use them. Where we do, should they return creative capacity and impact to our work, we should use them responsibly.
These guidelines and principles we defined together help us to continue to make careful decisions about where and why we use AI, and what we are responsible for if we do use it.
If your are seeking to navigate how to use AI, The Opening Door and With are ready to support the development of your personal Responsible AI Guidelines. We look forward to collaboratively helping our partners ensure they are responsibly employing AI when building brands and experiences for better futures.
They serve to:
Translate values into do‑able behaviours such as keeping a human in the loop, be transparent about AI use, avoid sharing of personally identifiable and confidential information, prefer privacy‑preserving tools, respect IP, mitigate bias, right‑size models for sustainability, and require review for high‑impact automation.
Elevate higher‑risk use cases into an AI Impact Assessment with Director approval, reinforcing that oversight scales with impact.
Normalize disclosure of AI assistance to clients and teammates, and provide non‑AI alternatives on request.
Treat energy use and model sizing as first‑class considerations, not afterthoughts.
Pair policy with training, external references, and internal notes so teams can practice the guidelines.
Emphasize that the guidelines are rooted in With’s co‑design ethos: multi‑disciplinary input, open artifacts, and iterative refinement.
Moving toward Communal Intelligence
As we’ve developed new working models and services within this guideline framework, we believe that there is something more beneficial and transformative than how we understand and use “artificial intelligence” today.
Rooted in the strength and value of our collective knowledge and wisdom, there are approaches to AI that can serve to empower and progress communities and societies, that don’t perpetuate harm or extreme capitalism.
In Brazil, projects led by IBM Research and the University of São Paulo collaborate with Indigenous communities to develop AI-powered writing and language tools aimed at promoting endangered Indigenous languages like Nheengatu. This initiative integrates symbolic and data-driven AI to document and revitalize Indigenous linguistic heritage with strong community involvement.
Open-source AI contributes to faster innovation, transparency, bias mitigation, and decentralization of AI power, enabling communities to create culturally appropriate AI solutions that reflect their values and resist homogenizing big tech influences.
These communal approaches to AI to us built on the benefits we have seen in participatory design methods to create shared prosperity. Subscribe to our journal to get updates as we explore and learn more.
“There is a different way forward. Artificial intelligence doesn’t have to be what it is today. We don’t need to accept the logic of unprecedented scale and consumption to achieve advancement and progress.”
— Karen Hao, Empire of AI
About The Opening Door
The Opening Door (TOD) is an agile full-service responsible artificial intelligence agency, empowering organizations and investors to shape the future strategically and responsibly. As an AI systems and transformation partner, we design, build, and embed AI inside organizations—emphasizing responsible use through literacy, governance, and development. By prioritizing responsible AI practices, we equip organizations with robust, future-ready solutions that not only drive measurable business outcomes but also strengthen brand integrity and stakeholder confidence.
Over the last couple of years our studio started to see the maturity of AI tools turn toward a pivotal point. The adoption of these systems has seemingly permeated every aspect of the tools we use in our field.
Recently, we have seen the consequences of AI proliferation take hold, from how generative models may be impeding cognitive development and worsening mental health, to the impacts the underlying infrastructure powering its usage has on our environment and communities, to the harms AI companies exacerbate through state surveillance and military crises.
As a design studio whose values are rooted in social and climate justice, we recognized that the adoption of systems marketed as “AI” needed to be better understood, both in their potential but also their impact, to assess why, how, and where we may use them.
Where we started
At the start of the year, we hosted internal workshops to document and discuss our uses, concerns, and ideas for how we use AI in our daily lives. If we were to imagine a future in which we use AI for the benefit of both our studio and our partners, we needed to begin in the way we start any work with our partners: Establishing foundational knowledge, sharing our concerns, and imagining potential futures.
Early adopters within our team shared ways they had used AI for everything from AI-assisted search, to transcribing and synthesizing audio recordings, to augmenting image editing.
A few team members were prescient about the concerns about employing contemporary AI, including: privacy and transparency of data shared with AI models, the ecological and community impacts of AI usage, job loss due to increased automation, labour exploitation to help train and grow these systems, and the impact of AI on learning, creativity and critical thinking.
Summed up, we had more questions than answers when it came to the potential application of AI. To figure out how we move forward, we needed to deepen our understanding and develop principles and guidelines for responsible use.
Partnering with The Opening Door
As we often do, we turned to a specialist to support and facilitate this exploration - in this case, Rose Genele. Rose is a responsible AI practitioner and applied AI ethicist with years of experience in the tech industry. She sits on the boards of the Canadian Centre for Ethics and Corporate Policy and Volcano Theatre, and is a member of the International Association for Safe & Ethical AI.
As the organizer of the Toronto chapter of the global All Tech is Human ethical technology community, which I am part of, Rose shared our urgent but cautious perspective on the rapid advancement of AI. Her practice, The Opening Door, seeks to support the responsible development and use of AI systems.
Together, we partnered with the goal of deepening our understanding of ‘AI’ systems and co-developing policies and contractual terms that would guide both our internal practices and services for our client partners, respectively.
Our work together involved numerous activities including pre‑reads and presentations, a collaborative heatmap activity, and post‑session feedback that shaped the guidelines and contractual updates we ultimately co-created.
Through our discussions and facilitated learning, we came away with a deeper, shared understanding of AI as it is understood today and how it affects ourselves, our work, and society at large.
The Mythology of AI
First, the team came to understand what it means when we talk about “AI” - a broad and purposefully opaque term that applies to numerous underlying technologies and practices.
AI today is experienced through closed, large language models like OpenAI’s ChatGPT or Anthropic’s Claude. Those systems are trained on large sets of data, rewarded or penalized for their responses, and most notably are able to simulate human-like conversations.
That same familiarity masks the fact that these models are still simply prediction-based algorithms versus having true intelligence. (Intelligence itself is something not consistently defined.) As they subtly warn you their results, while sounding very confident, can be inaccurate or simply false.
These themes were discussed in ongoing conversations we had at our studio’s annual retreat.
“We call it Intelligence, we say hallucinations. We call these, like, systems, these predictive systems ‘neural networks’ with ‘neurons’. So they're intentionally aligning themselves to kind of like a human replacement in the language.”
— Design Director, With
“I think a lot of my assumptions around AI and how… the industry is actually operating at the moment were not accurate or incomplete.”
— Designer, With
In the end, our general consensus was the promise of these systems - pattern recognition and generative abilities - so far have been outweighed by the costs of maintaining them at scale.
Before we moved forward we needed to understand these impacts more deeply.
The Bad and the Ugly
Reflecting our earlier concerns from an intersectional lens with The Opening Door raised further challenges that were important to name, including:
Environmental and Economic Tensions: The race to build AI infrastructure has been the priority of governments around the world, including here in Canada, for the perceived economic benefits it is suggested to create. But at what cost? The loss of fresh water required to cool data centres alone, even in a country rich with the resource, is a challenge no one has a clear answer to solve.
Creative Labour Exploitation: Individual creative capital is simultaneously threatened by systems that are trained on unknown amounts of stolen intellectual property, while being leveraged to displace the skilled work used to create that original value.
Geopolitical Implications: The data collected by AI models, just as in social media use, can unknowingly be used to further the military application. These systems pose serious ethical concerns as to how our data is used to further conflicts, displacement and in genocide in the world.
It should be noted, many of these concerns are relevant to technology we have used prior to generative AI adoption, like social media. However, the 'AI Infrastructure Gold Rush’ to support the scale of these systems amplifies the impact and severity of these issues as unprecedented amounts of private and public investment is being used to drive this next wave of technology.
As we publish this article, even the markets who were bullish about the potential of AI have started to correct as we approach the trough of disillusionment around AI.
What it means for our partners
As creative professionals, we need to understand where these systems truly do empower us to create impact in the work we do for our partners, but not at the cost of harm to people, their livelihood, or environment.
Following our discussions, we engaged in activities and exercises to determine a set of principles and prohibited uses of AI that we felt would violate those principles.
Additionally, we wanted to ensure we explore concerns around privacy and consent to ensure those were appropriately considered in our partnership agreements.
When we use systems that are fundamentally dependent on the data they can access, seeking consent, tracking usage, and being accountable for how these systems use our data are commitments we have made to our partners to ensure their views and intellectual property are protected.
Those commitments are now a standard part of our agreements with all our partners. Like our recent code of ethics, they bind both us and our partners to be responsible and transparent about the use of AI in one another's work, ensuring the systems we use aren't benefiting private entities who may repurpose them for their own capital gain.
Protecting the creative works we share with our partners from benefiting these closed systems is the same reason we ensured our team opted out of having our Figma files used to train Figma's AI models.
The Guidelines
The outcome of our work together with The Opening Door is an open and living AI Guideline document that we encourage others to review to inform your own responsible AI process.
These guidelines reflect a belief our studio maintained throughout the process: for all the hype around the value of AI systems, we don't actually need to use them. Where we do, it should be because they return creative capacity and impact to our work, and we should use them responsibly.
The guidelines and principles we defined together help us continue to make careful decisions about where and why we use AI, and what we are responsible for when we do.
They serve to:
Translate values into actionable behaviours: keeping a human in the loop, being transparent about AI use, avoiding the sharing of personally identifiable and confidential information, preferring privacy-preserving tools, respecting IP, mitigating bias, right-sizing models for sustainability, and requiring review for high-impact automation.
Elevate higher-risk use cases into an AI Impact Assessment with Director approval, reinforcing that oversight scales with impact.
Normalize disclosure of AI assistance to clients and teammates, and provide non-AI alternatives on request.
Treat energy use and model sizing as first-class considerations, not afterthoughts.
Pair policy with training, external references, and internal notes so teams can put the guidelines into practice.
Emphasize that the guidelines are rooted in With's co-design ethos: multi-disciplinary input, open artifacts, and iterative refinement.
If you are seeking to navigate how to use AI, The Opening Door and With are ready to support the development of your own Responsible AI Guidelines. We look forward to collaboratively helping our partners ensure they are responsibly employing AI when building brands and experiences for better futures.
Moving toward Communal Intelligence
As we’ve developed new working models and services within this guideline framework, we believe that there is something more beneficial and transformative than how we understand and use “artificial intelligence” today.
Rooted in the strength and value of our collective knowledge and wisdom, there are approaches to AI that can empower and advance communities and societies without perpetuating harm or extreme capitalism.
In Brazil, projects led by IBM Research and the University of São Paulo collaborate with Indigenous communities to develop AI-powered writing and language tools aimed at promoting endangered Indigenous languages like Nheengatu. This initiative integrates symbolic and data-driven AI to document and revitalize Indigenous linguistic heritage with strong community involvement.
Open-source AI contributes to faster innovation, transparency, bias mitigation, and the decentralization of AI power, enabling communities to create culturally appropriate AI solutions that reflect their values and resist the homogenizing influence of big tech.
These communal approaches to AI, to us, build on the benefits we have seen in participatory design methods for creating shared prosperity. Subscribe to our journal to get updates as we explore and learn more.
“There is a different way forward. Artificial intelligence doesn’t have to be what it is today. We don’t need to accept the logic of unprecedented scale and consumption to achieve advancement and progress.”
— Karen Hao, Empire of AI
About The Opening Door
The Opening Door (TOD) is an agile full-service responsible artificial intelligence agency, empowering organizations and investors to shape the future strategically and responsibly. As an AI systems and transformation partner, we design, build, and embed AI inside organizations—emphasizing responsible use through literacy, governance, and development. By prioritizing responsible AI practices, we equip organizations with robust, future-ready solutions that not only drive measurable business outcomes but also strengthen brand integrity and stakeholder confidence.
——