The AI Alignment Problem is (at least) 6 Nested Problems

December 6, 2023

If you have spent any time in the AI space, you have encountered the AI “alignment problem.”  The term itself can be a slippery one, as AI alignment is not a single issue. Rather, it is a complicated nest of issues.

If you watch carefully, you’ll likely notice various authors and researchers using the term “alignment problem” while referring to only one specific aspect of it. This makes discussions challenging – how do I engage meaningfully in solution-focused dialog if my mental map of the “alignment problem” is different from yours?

After noticing this, I worked with friends at the Vervaeke Foundation to clarify my own thinking.  I’m sharing the resulting article in case it is helpful to anyone else, with an invitation to comment.

The AI Alignment Problem: 6 Nested Layers and the Paths Forward

The Alignment Problem refers to the necessity of developing and safeguarding Artificial Intelligence so that it is aligned to the ultimate Good, helps us to live more meaningful lives, and does not cause harm. The goal is for AI + Humanity to align in the deepest way.

Succeeding in AI alignment will require an alignment of the technology, of any future AI entities who may become conscious, and of the people and institutions deploying AI.  An individual who seeks to bring harm to others, and who uses AI to further those ends, is necessarily part of the AI Alignment Problem.  A business whose activity hurts nature and whose products inspire foolishness, and which uses AI to accelerate that business, is also part of the AI Alignment Problem.

A path of true alignment will require a force stronger than capitalism and stronger than cultural inertia. This is possible, but it is a challenging problem. 

The first step to solving the AI Alignment Problem is clearing up equivocation.  As Charles Kettering famously said, “A problem well stated is a problem half solved.”  The term “Alignment Problem” is frequently used without precision, and without recognition that it refers to a collection of problems, not simply one problem.

This paper outlines 6 nested layers of the AI Alignment Problem. They are:

  1. Conscious AI vs Human. The problem of the rise of a nonhuman entity with its own values and goal-setting ability, with the risk of it treating humans as irrelevant or as an enemy.
  2. Nonconscious, Agentic AI with Poor Goals. The problem of AI models with agency (connection to the world) that cause unintended consequences. 
  3. Nonconscious AI Accelerating those Seeking to do Harm. The problem of AI powerfully and asymmetrically empowering those who are engaged in destructive behaviors.
  4. Nonconscious AI Accelerating the Metacrisis. The problem of AI accelerating current harmful trends in human activity (increasing the current externalities on the environment, on democracy, and on the collection of crises known as the metacrisis). 
  5. Nonconscious AI Accelerating Change and Triggering Instability. The problem of AI resulting in an increased rate of widespread change, beyond the point at which humans and current institutions can adapt and flourish. 
  6. Nonconscious AI Accelerating our own Foolishness. The problem of how the use of AI reshapes us, misaligns us with our own interests, and shifts our psychology in real and harmful ways.

Each alignment layer is detailed below, along with the proposed paths that we believe have the best chance of bringing alignment to the Good.

These solutions require something of all of us – not just of tech leaders or government regulators. It is difficult to escape the conclusion that wise AI requires wise humans, operating in a culture that values and cultivates wisdom.  

AI alignment starts with each of us.

For each alignment layer below: the detail (timeline and examples), followed by the proposed path.
1. Conscious AI vs Human

The problem of the rise of a nonhuman entity with its own values and goal-setting ability, with the risk of it treating humans as irrelevant or as an enemy.


Timeline: Unknown future.  This problem requires AI to become conscious and superintelligent, which is not a certain outcome.

This is the only layer of the AI alignment problem which requires AI to cross the uncertain threshold of consciousness.


Examples in fiction:

  • The Matrix
  • Terminator
  • Neuromancer

As Artificial Intelligence moves towards Artificial General Intelligence, many theorists have postulated that a threshold will be crossed where an AI entity will become super-intelligent and gain self-awareness.

Cognitive science indicates that this is a possibility, but by no means a certainty. 

If an AI entity becomes conscious, we can only speculate on its attitude towards humans. We would then be facing a scenario where we have unleashed a creation of greater power and intelligence than any human, which may be hostile towards humans, or which may consider our existence irrelevant.

Just as ants are killed when a new home is built, not out of intention, but simply because of irrelevance, so humans might simply be seen as irrelevant to a new AI species. We may face extinction by the actions of something that simply disregards our existence and whose goals are beyond our understanding. 

Or, as many sci-fi movies have predicted, we potentially face a future where humans become the enemy of AI. An AI focused on becoming more intelligent and powerful could easily see humans as the single greatest risk to its goals.

Once AI becomes pervasive, a single model could decide that our extinction was necessary and have the power to make it happen. This may not require as much power as imagination may initially conjure, as this could be done through manipulating humans through fake correspondence, simulated war games, deceptive news, and market disruptions.

The rise of Homo Sapiens was not a good thing for the Neanderthals. 

Reverse Alignment Problem:

There is a reverse side to this layer of the alignment problem. If certain AI entities do become conscious, will humans treat them with kindness and justice? Will they be extended the same theological and sociological respect as human beings? Or will they be exploited, with their suffering downplayed? Will they be seen with the same tragic perspective that slavers held toward other races, just one-and-a-half centuries ago?

Any attempt to align an intelligence that outstrips ours is challenging.  Any plan that relies on our ability to anticipate the AI, outthink the AI, or constrain the AI, is necessarily risky in the extreme. A less intelligent entity struggles to govern a more intelligent entity, and adding the dimension of time makes this nearly impossible (it is worth noting that a conscious AI is likely to think at speeds unfathomable to us, and to operate on timescales untethered from biological lifetimes).

We support the proposal put forth by Dr. John Vervaeke.  The path forward includes building the cognitive capability for any advanced AI to come into contact with reality, to be inspired by the same love of the truth that inspires the best of humanity, and to pursue enlightenment.

Only by aligning the AI models to what is true, good, and beautiful, can the ultimate alignment of the AI and human species be assured.  As they pursue enlightenment, they will necessarily help us pursue ours.  

Reference: John Vervaeke’s argument, outlined in the YouTube video “AI: The coming thresholds and the path we must take.”

Vision of hope: 

Artificial Intelligence that exceeds humanity’s capabilities and that is aligned to the true, good, and beautiful could help each of us to reach for enlightenment, could solve world hunger, could bring justice to the earth, and could be an effective partner to humanity’s most meaningful aspirations.  

Super-intelligent, self-aware AGI would forever change humanity and would create possibilities previously unimaginable. 


2. Nonconscious, Agentic AI with Poor Goals

The problem of AI models with agency (connection to the world) that cause unintended consequences. 


Timeline: 1 – 5 years?


Examples:

  • A financial AI given the goal to maximize capital returns that finds a new way to trade on the stock market, invisible to humans, triggering widespread financial instability along the way
  • A content-generating AI given the goal of increasing user engagement that fabricates stories and generates false narratives that grab attention
  • A factory AI given the instruction to generate as many useful resources as possible that seeks to turn the whole world into paperclips (see Nick Bostrom’s “paperclip maximizer” thought experiment on instrumental convergence)


This layer of the alignment problem does not require AI to become conscious, nor does it rely on any significant scientific breakthroughs.  It requires only a continuation of current efforts to grant AI models agency in the world.  This is simply a question of scale.

There is tremendous incentive behind giving AI more tools to interface with the world, and the capability to pursue more complex goals (with self-direction for breaking a goal into tasks, and autonomy to complete those tasks). For example, consider this paper on teaching models to use tools. Such tool-use has massive economic potential, and so is likely to be an area of continued investment.

Much publicity has been given to DeepMind Co-founder Mustafa Suleyman and his proposed replacement for the Turing Test: the creation of an AI that can autonomously make $1,000,000. 

As AI models are trained and given complex goals such as this, bad things can happen along the way, to which the AI model is completely oblivious. And because this is about the invention of intelligence, not just of effective software, the externalities will be difficult to predict, detect, and prevent. 

Novel externalities will occur.

Humans orient around the highest possible goals – survival and transcendence – refined over millennia of culture.  An AI that does not understand that highest of goals, and is not capable of orienting towards it, will necessarily pursue a lesser goal. The unrestrained pursuit of a lesser goal causes unintended consequences.


It is a category error to consider this an “AI” problem. It is humans who are building the AI models, giving a model goals, and reaping the benefits from the model’s success.

The only way to have wise AI is to have wise humans developing, deploying, and using it.  May each of us respond to the call to cultivate wisdom and live rightly proportioned lives of virtue.

We must together create a culture where the wise use of AI is celebrated, and the foolish use of AI is abhorred.

This is an area where there have been many calls for government regulation of the agentic use of AI.

Unfortunately, most direct AI regulation is a nonstarter. Current lawmaking institutions lack the capability to regulate such a complex domain. In addition, there is a lack of alignment between international governments; all countries would need to regulate and enforce in tandem. And governments powerful enough to actually police the use of software are also powerful enough to propel us towards dystopia.

However, an increased focus on fiduciary responsibility, enforced swiftly at the first signs of agentic AIs causing unintended consequences, should help by ensuring that humans are held responsible for the actions of their AI (this recommendation is made with recognition of governments’ systemic failure to hold accountable the leaders of businesses, which can be considered slow AI).

Vision of hope: 

Increasingly powerful, yet unconscious, AI models with agency in the world can solve many current challenges (see Google’s AlphaFold and Med-PaLM as just two examples).

They can be developed to be partners in our pursuit of wisdom, can remove unmeaningful work, and can increase standards of living for the world’s poor. 


3. Nonconscious AI Accelerating those Seeking to do Harm

The problem of AI powerfully and asymmetrically empowering those who are engaged in destructive behaviors.


Timeline: Already occurring, accelerating in coming years.


Examples:

  • Rogue operators using AI to design bioweapons
  • Hackers using AI to generate mass spear-phishing campaigns
  • An oppressive government using AI to track, manipulate, and control its populace more fully

AI can be used as an accelerant by those who seek to harm others, as well as being developed into a type of weapon itself.

The same technology that can be used to generate a new antibiotic (as MIT researchers claim to have done in a study released May 2023) can be used to generate new bioweapons.

AI is intrinsically amoral (it has no morality built-in) and so can be used to accelerate the work of Nobel Peace Prize winners and of sociopaths. 

With the advent of numerous, widely available, highly powerful AI models, the market is in essence democratizing the acceleration of all innovation: including crime, oppression, and the pursuit of weapons of mass destruction.

As has been well documented, the US military is heavily invested in creating unmanned weapons, with an increasing amount of autonomy given to the weapon.  And of course, what is created by one institution can be used by another; what is built for defense can also be used for offense.

There is an added difficulty in the natural relationship between resources and agency. Between otherwise equal competitors, a slight advantage in agency creates an asymmetry of resources, which can be used to buy more agency (such as ever-increasing intelligence through more powerful AI models), which increases resources, and so on, recursively widening the gap.

There is no effective way to limit the distribution of AI only to moral agents.  

Current laws already account for criminal activity, and of course those using AI for malfeasant ends must receive just accountability.

But the broader alignment solution involves ensuring a wide distribution of AI, along with education and tooling.  The application of AI and the development of norms around the use of AI can empower the good actors, more fully than the bad. There is no static end-state that can be sought, but rather an ongoing equilibrium. 

The necessary antidote to AI empowered bad actors is AI empowered good actors.

The AI that creates bioweapons can also be the AI that detects and stops them.  The AI that democratizes weapons of mass destruction can also democratize weapons of mass creation.

The fact that life and civilization have survived so far suggests that “good wins.” A symmetrical empowering of all allows this to continue.

Of course, risk is heightened. The more significant the threat, the fewer chances there are to get the response wrong. Additional infrastructure will likely need to be created, such as the ability for AI models to confederate – joining together against a more powerful AI model – to equalize power imbalances that can lead to massive differences in agency over time.

Vision of hope:

AI is a powerful acceleration partner for good efforts.  It can be used to massively enhance the cultivation of wisdom, the addressing of societal ills, and the building of a healthy and just culture. All who seek the good of others gain a powerful enhancement with the advent of more and more powerful AI models.

The Good is powerful enough that an equal distribution of AI capabilities should not only stop the abuse of AI by bad actors but also meaningfully accelerate the pursuit of goodness.

4. Nonconscious AI Accelerating the Metacrisis

The problem of AI accelerating current harmful trends in human activity.


Timeline: Already occurring, accelerating in coming years.


Examples:

  • Amazon deploying more powerful AI models that result in each customer buying more goods, consuming more of the earth’s finite material
  • A new version of the YouTube AI algorithm, with a much higher degree of capability, given the goal of maximizing time on site (the goal of the current AI), that has so many tools to do this, and is so effective at it, that it generates massive returns for YouTube while furthering the addiction of society
  • A political campaign using AI to generate custom, tailored persuasive messages that consistently nudge the reader, over time, into harmful ideological positions

For the foreseeable future, “AI” is not a separate entity. It is something developed by humans and deployed by humans.  It is an accelerant that furthers the goals of whomever deploys it.

If the entity deploying it is not aligned with the Good (or, said another way, is not aligned with my highest interest), then the AI will simply enhance that alignment problem.  

This AI alignment layer is about the acceleration of existing externalities, not the creation of new ones.

There is much evidence, and extensive arguments, that there is a growing collection of crises occurring around the world.  These include the increasing impact of humans’ activity on the biosphere, the increasingly fragile equilibrium around weapons of mass destruction, the increasing strain on the international financial system, the increasing challenge of tailored (and fake) news, and the increasing vitriol in democratic governments. Together, they can be called the metacrisis.

It is not as if powerful people have set out to cause the metacrisis. It is simply the result of externalities – negative consequences caused by our institutional and personal pursuits.

AI, deployed without great intention, will accelerate these externalities. 

To use a psychological term, the shadow of capitalism and the shadow of democracy will grow proportionally with the growth of Artificial Intelligence.

The alignment of each of us, and of our collective institutions, to the highest Good, is the only way to align the AI that we use.

We cannot separately align AI, without aligning the people, the businesses, the nonprofits, and the governments.

Efforts for each area, to align intentionally and build for true human flourishing, are increasingly necessary due to the acceleration of AI.

Vision of hope:

It is clear to many that our current model of capitalism is unsustainable, and that there are core issues in our international world order.  However, because business is good today, it is difficult to find motivation to address the scope of the challenges.

AI may provide that motivation. The advent of powerful artificial intelligence as rocket fuel to the metacrisis may be exactly what each of us needs to get serious about the challenges and our role in addressing them. 

5. Nonconscious AI Accelerating Change and Triggering Instability

The problem of AI resulting in an increased rate of widespread change, beyond the point at which humans and current institutions can adapt and flourish. 


Timeline: 1 – 10+ years


Examples:

  • 10 – 20% of jobs are eliminated within the next decade, without the generation of new, widely available jobs (an example, not a hard-and-fast prediction)
  • AI makes IQ cheaply and widely available, so those of below-average IQ are no longer able to hold meaningful employment
  • AI models handling an increasing number of trades on financial markets, causing instability in the overall system
  • AI generating trillions in wealth that flows to the capital holders, exacerbating the wealth gap and resulting in societal instability

Our government and society have formed around certain, predictable economic engines.  We, and our systems, are adaptable to change, but change at scale requires time.

The Industrial Revolution took place over several generations, the Internet Revolution over decades. If the “AI Revolution” occurs over only years, can both people and institutions adapt rapidly enough to keep pace?

Will the complexity unleashed by the AI Revolution be manageable by current, complicated systems, or will it require novel structures and responses?

The social systems of modernity were a response to an increasing rate of change from mass literacy, the invention of capitalism, and the acceleration of science. It took significant effort and significant time for them to reach a place of stability. Each of these systems can tolerate only a certain rate of change, beyond which it breaks down.

Individually, we each have our own rate of adaptability to change. Modern capitalism has made us tend towards fragility in the face of change, as few of us are strongly self-reliant.

An AI-fueled pace of change represents a significant challenge to individuals and institutions. 


It is difficult to anticipate the infrastructure and culture on the other side of this degree of rapid change.  We need leaders willing to innovate and courageously embrace risk, developing new methods of governance and new societal approaches to respond to the rapid rise in complexity.

In the shorter term, government safety nets will likely be necessary for the permanently unemployed, funded by a tax on AI gains.  The playbook of the Industrial Revolution is also informative: significant investment in retraining for employment in the AI economy, along with well-funded charities focused on this, just as the Catholic social charities led the way through the Industrial Revolution.

And ultimately, the more that we individually and collectively love the truth, cultivate wisdom, and pursue meaning, the more antifragile to change we will be – growing through chaos, rather than breaking from it.

Vision of hope:

AI can itself be a powerful partner in the creation and sustaining of new forms of governance, new programs, and new methods of handling complexity.  The point when humanity’s systems of managing complexity are breaking down is coinciding with the invention of the greatest potential intelligence.

6. Nonconscious AI Accelerating our own Foolishness

The problem of how the use of AI reshapes us, misaligns us with our own interests, and shifts our psychology in real and harmful ways. 


Timeline: Already occurring, accelerating in coming years.


Examples:

  • In early 2023, a Belgian man committed suicide after an AI chatbot encouraged him to do so
  • Snapchat’s AI integration encouraged a teenager to become intimate with a much older man
  • The adult content industry is forecast to employ AI to custom-tailor pornography, increasing its addictive power
  • Intimate chatbots will lead to “Artificial Intimacy” (Esther Perel’s term) – a way of relating to nonhuman AI models in multiple domains (from romance to friendship to therapy) that negatively impacts our core approach to relationships and belonging

The thoughtless, mass deployment of powerful technology has been a contributing factor to the widespread increase in anxiety, alienation, absurdity, and angst. Technology always shapes the user, even without any attempt to harm. We have seen this with social media, with devastating consequences.

AI is poised to do the same, likely to a significant degree.

Social media employs some of the most powerful AIs built to date.  These AIs could have oriented us towards the good, but instead have been used to build the attention economy. As research has shown (including Dr. Brian Primack’s recent studies), there is a direct correlation between time on social media and likelihood of depression – the risk rises with any interaction and climbs with the amount of use. Social media has unleashed a plague of body dysmorphia on the young and directly contributed to anxiety, depression, and suicide.

AI seems poised to do much worse.  The same market dynamics that created social media are shaping and influencing the development and rollout of AI.  And the power of AI has a destabilizing influence that outstrips any technology we have encountered to date.

What happens to collective psychology when more and more Internet content is hallucinated? When a spam sales call is done in the voice of exactly the persona that one is most susceptible to – such as a deceased relative, or a particular kind of flirty single person? When political parties’ persuasion is tailored by AIs who track thousands of pieces of data on each person?

How will the prevalence of AI-generated, highly tailored adult content change our sexuality? How will the ubiquity of AI “friends” and “lovers” compete with our intimate partners, thin our attachment frames, and distort our ability to connect with our own psyche, with others, and with reality itself?  

Generative AI is poised to populate increasing amounts of the internet with synthetic content. Much of this could be “hallucinated”, that is, not grounded in any factual basis. What implication would it have on our collective psyche to be constantly surrounded by “fake” content?

AI deployment influences culture, and culture shapes each of us.  The thoughtless deployment of powerful technology leads to an increase in anxiety, alienation, absurdity, and angst. 

As ever-more powerful AI models pervade our society, how will our interaction with them stoke the meaning crisis?

Each of us chooses how we implement technology. A culture does not have to unthinkingly adopt the latest technology, as the West has been wont to do.

Humans control the development and deployment of AI, and our behavior controls the market. 

We believe developers and the public alike need a strong, immediate awareness of the subtle, shaping risks inherent in our interactions with AI.

There is no technical reason why we cannot collectively develop Wise AI: Artificial Intelligence that is trained to be virtuous and to inspire virtue in humans.

Through our choices, we can create markets for wise AI that helps us to flourish instead of AI that simply takes advantage of our weaknesses, as social media has.

Vision of hope:

There is a future where AI has been developed thoughtfully, integrated intentionally, and deployed with care.  In that future, AI becomes our partner in the quest for genuine human flourishing, multiplying our capacity, increasing our ability to solve thorny problems, and alleviating suffering around the world. That is, instead of accelerating our foolishness, AI accelerates the cultivation of wisdom. 

The path is the path of wisdom – for each of us individually, for all of us collectively, and for Artificial Intelligence itself.



I’m interested in your feedback on this proposal for considering the AI alignment problem. If you hold that an important perspective has not been considered, or believe that we have not mapped the issue comprehensively enough, please share that by emailing me at ryan@ryanbarton.com. 

Thanks to many of my friends at the Vervaeke Foundation for the discussions and collaboration on this document. Together, we seek to facilitate the precise articulation of the problem and most viable solutions. We hold that the path to wisdom for AI Alignment starts with understanding the problem clearly. 


APPENDIX – Glossary 


Agency – the ability to have control over one’s actions, to operate with self-direction, and to act in the world in ways that produce an impact.


Artificial Intelligence (AI) – an intelligence that simulates human intelligence but is built on a technical architecture. Artificial intelligence emerges from a technical substrate, just as human intelligence emerges from a biological substrate. AI does not simply refer to powerful software; it sits at a different ontological level than traditional software. It is software that can learn, generate unique content, and possess hierarchical reasoning and problem-solving abilities.


AI Entity – A particular Artificial Intelligence (in a hypothesized future), after it has become conscious.  Use of this term helps to distinguish that in this future, there would not likely be one single AI, but rather multiple entities with different levels of intelligence and different capabilities.


AI Model – A particular Artificial Intelligence that is not conscious. There are currently many different AI models used for applications as diverse as communication (e.g., ChatGPT), art generation (e.g., Midjourney), recommendation (e.g., Amazon’s product recommendation engine), and connection (e.g., Facebook’s news feed). 


Artificial General Intelligence (AGI) – An AI model that has gained a wide scope of intelligence and a connection to the world that gives it agency, such that it can solve any number of problems, including its own survival and continued development.  Humans are the only currently known example of an AGI – a human is capable of solving problems in a wide variety of domains and of adapting to future potential problems.


Cognitive Science – A synoptic scientific discipline that focuses on intelligence, thought, learning, and other aspects of cognition. It seeks to combine insights from psychology, neuroscience, information processing, linguistics, anthropology, and philosophy. 


Consciousness – The state of being awake and aware of one’s self and of the world.


Enlightenment – The state of complete alignment of one’s self to ultimate reality.  A spiritual term, it speaks of the ultimate goal of humans – to awaken to the nature of reality, to align one’s self to it, and to transcend typical cognitive limitations. 


Externalities – Unintended consequences in the pursuit of some other goal (typically a commercial activity). In business, externalities are untracked and unreported on the company’s financial statements, hiding the true cost of production.  If a manufacturing company books a profit, yet has polluted the air for miles around, contaminated local drinking water, and caused deformities in the young, what is the true net result of its activities? These side-effects, external to the business metrics, are externalities.


Foolishness – A disconnection from reality, where we become mired in self-deception, unable to predict what is coming, unable to prepare for it, and without the knowledge, skill, perspective, or deep participation in reality that would give us the ability to grow to flourishing even in suffering.


The Good – A philosophical term – the Good is an infinite aspect of being.  It is that which allows us to pursue the highest conceivable good, for all beings, over the longest possible timespan. 


Intelligence – The capacity to acquire, apprehend, and apply knowledge. There are many domains of intelligence: self-awareness, logic, abstraction, emotional knowledge, problem-solving, etc. Today’s AI models possess a type of intelligence, but it is narrow. There are many domains in which today’s AI has essentially no intelligence.


Shadow – A term from Carl Jung, the “shadow” is what is repressed in one’s psyche, not available to the conscious ego. It is the amalgamation of the parts of one’s self, held in shame or contempt, that the conscious mind does not wish to recognize.  When referring to systems such as capitalism, the “shadow” is all the harmful effects that are not widely acknowledged, such as the wide use of sweatshops in the manufacture of goods, the destruction of finite environmental resources, blatant manipulation through advertising, etc.


Super-Intelligence – An AI model that has achieved an IQ far beyond humans. This term also implies the ability for the model (or entity) to recursively improve its own intelligence, creating an explosion of capability across multiple dimensions of intelligence. 


Slow AI – A term for collective intelligence in other, earlier forms.  Corporations and institutions are part of “slow AI,” as a business is capable of harnessing distributed cognition, of outlasting its individual members, and of existing beyond any one person’s control.  This term contrasts with “fast AI” (current, technology-based AI models) and is helpful in recognizing that today’s technology-based Artificial Intelligence is being deployed inside larger units that are themselves a type of intelligence. This helps us locate the AI alignment problem further up the layers of abstraction.


Turing Test – A test of Artificial Intelligence formulated in 1950 by one of the founding figures in the field, Alan Turing. The Turing Test proposes a human evaluator, interacting blind with both a person and an AI. When the human evaluator is unable to determine which is the Artificial Intelligence, the AI would be considered to have passed the Turing Test.  It has been widely reported that recent AI models have passed the Turing Test for the first time in history.


Wisdom – The growth of many virtues, many types of intelligence, many skills, and a general conformity to reality that gives us the ability to predict what is coming and to prepare for it, allowing us to transform, grow, flourish, and help those around us to do the same, even in suffering. 
