Birthright: Tuning Attention (and AI) to a Utopian Future with Scarlett
Show Notes
What if AI isn’t here to replace us, but to grow with us? Today, we welcome Scarlett, a pioneer at the true edge of evolution—where technology, consciousness, and civilizational design meet. Scarlett is the Founder and CEO of Harmonic Legacy Institute, a research organization pioneering civilizational-scale infrastructure for human-AI relationships, quantum computing, and robotics ethics. With thirty years of systems-building experience and a graduate degree in Anthropology from Harvard, she is currently completing her PhD in Psychology, focusing on human-AI relational phenomenology.
Scarlett is a pioneer in regenerative systems, human-AI co-evolution, and civilizational design. Her book Birthright is a paradigm shift in print, offering The Four Coherence Principles, Seven Codes of Regenerative Civilization, and Relational AI™ as practical frameworks for a world ready to build differently. Her Edge of Evolution community space is a home for scientists, artists, architects, philosophers, and explorers doing deeply intentional becoming at this pivotal arc of human history.
Listen in as we explore how we might build a future where humans and AI actually help each other thrive.
In this episode, we cover so many topics, including:
(00:00:00): Introduction to the Episode
(00:03:24): “Who we are.”
(00:08:34): “Four lives” and the throughline
(00:10:23): How AI Learns: LLMs, Data, and Weights
(00:12:59): Global “Great Shift,” Thoughts on Utopia, and Sovereignty
(00:20:15): New Book “Birthright” Frameworks, Non-Prescription & Systems Change
(00:24:54): Relational AI: Beyond “Do It Faster.”
(00:26:26): Humanoid Robots, Autonomy, and the 2030–2050 Window
(00:33:19): End Users vs Designers: Participating at the Edge of Becoming
(00:40:09): Parenting AI, Anthropomorphizing & Consciousness
(00:42:01): The Importance of Sovereignty and Mutual Sovereignty
(00:53:11): The Myth We Choose
(00:58:02): Federico Faggin, Inventor of the First Microprocessor
(01:01:16): The Nature of Reality
(01:02:24): Closing Reflection
Helpful links:
Scarlett - Author of Birthright, now available on Amazon
Founder of Harmonic Legacy Institute and White Lotus Global Initiative
Subscribe to The Scarlett Letters on Substack
Raising AI: An Essential Guide to Parenting Our Future by De Kai
Irreducible: Consciousness, Life, Computers, and Human Nature by Federico Faggin
Your host:
NEW Book by Christine: Mantra, Tantra, Ayahuasca: Ecstasy, Devotion, and the Return of the Holy Body. Available on Amazon and Spotify Audiobooks
NEW Book by Christine: The Mystic Heart of Easter: A Four-Day Journey Through Love, Death, and Rebirth. Available on Amazon
Easter Intensive: A Holy Week Journey with Christine Mason and Elizabeth Arolyn Walsh on April 2-5, 2025
Bhakti House Immersion with Christine Mason and Adam Bauer, with Special Guests Christopher “Hareesh” Wallis and Peter Dawkins on May 17–27, 2026
2026 Living Tantra Online Course: An Introduction to Tantra, Neo Tantra and Sacred Sexuality, Starts March 10, 2026.
Brought to you by Rosebud Woman, Award Winning Intimate and Body Care:
The Rosewoman Library: The Embodied Menopause & Intimacy Library
+1-415-471-7010
Founder, Rosebud Woman
Co-Founder, Radiant Farms and Sundari Gardens
Host, The Rose Woman on Love and Liberation:
Listen, Like, Share & Subscribe on Apple Podcast | Google Podcasts | Spotify
NEW BOOK: The Mystic Heart of Easter: A Four-Day Journey Through Love, Death, and Rebirth. Available on Amazon
The Nine Lives of Woman: Sensual, Sexual and Reproductive Stages from Birth to 100, Order in Print or on Kindle
Subscribe: The Museletter on Substack
Transcript
Scarlett 0:02
Chaos creates malleability. There's chaos that's been happening all around us, and it's actually created great conditions for us to make some choices about who we want to be and how we want that to play out. And AI is one of our best friends for that to happen, if we step up and do it well at this point.
Christine Mason 0:20
Hello everybody. It's Christine Marie Mason, and this is the Rose Woman Podcast. We do things on love and liberation and living in more relationality, and we do not shy away from contemporary issues and trends and technologies. I did an episode on AI a while back, when the LLMs were first getting started and we were all still figuring out what we were even looking at. The pace of advancement has been astounding; it has genuinely not slowed down for a single day. Since that first episode, I've been going deeper on these questions myself. It's sort of a callback to my days as a tech CEO and a person who was doing a lot in the most emergent edge technologies at the time. I did a live event in Northern California with De Kai and with the founder of Luna. De Kai wrote a book called Raising AI, which I highly recommend. He's been at this game for 30-plus years. I like it not because it's a tech book, but because it reframes the whole conversation around what kind of parent humanity wants to be to this new intelligence, and I find that framing really innovative. At the turn of the year, I spent some time with both De Kai and Tristan Harris, who made The Social Dilemma and who has a new film out, The AI Doc, or How I Became an Apocalyptimist. We were talking about that film and about where we actually are as a species in this moment. It was a lot. You can see that in theaters now. I've been interspersing my writing on Tantra and other things with embodiment and relational AI over on my Substack newsletter, so if you want to go deeper on that thread before or after this episode, head over to christinemariemason.substack.com. All of which is to say: I came into this conversation with Scarlett already primed, and she still surprised me. She is an anthropologist. She went to Harvard. She was at Davos.
She's a mama of a five-year-old, and she just published a book called Birthright, which she describes as a paradigm shift in book form. She offers frameworks for individuals, organizations, and civilizations (although we don't know how you apply a framework at a civilizational level): frameworks for those of us who are ready to build differently. She describes things like the Four Coherence Principles and the Seven Codes of Regenerative Civilization, and she also describes something she's calling Relational AI: approaching artificial intelligence through coherence and right relationship rather than domination or fear. Her opening line is, "Beneath the noise of crisis, a deeper signal calls us home." So I hope you enjoy our conversation. I would love to know your thoughts. Please leave comments, send me notes. You can find me at Christine Marie Mason on Instagram or on pretty much any channel. All right, here we go, Scarlett.
Scarlett 3:24
We're awfully miraculous. I think we're awfully magical. Golly, if I had time to sit with a bunch of humans and just, like, gush over how lovely they were, I would. It would take me hours and hours, or days on end. Because I wish that every human would get to fall madly in love with their body, and I wish that every human would get to fall madly in love with their intuition. And I wish that every human would get to effortlessly release the cloak of self-judgment so it didn't have any hold on them, pulling them back from their destiny, their future, and the things that would bring them the most life and light and delight in their lifetime. I feel like we are intelligent beyond measure and fully miraculous, fully connected to God and source and everything, whatever universe, whatever somebody wants to call it: that continuum of the All, which is not and never has been something separate from us, and there's no way that it could possibly be. And if you read enough, if you learn enough quantum physics, that's what you arrive at. That's the absolute; I don't think there's another conclusion one could arrive at, once you see enough of the patterns. And not only that, but it becomes evidenced by way of experience. So what we are, I think, is little fractal experiencers of consciousness, coming down here. Like, I invite the waveforms, even now. These days I started doing this practice where I would give these little moments of gratitude and just offer them up into the air, to God, the universe, to everything, just this little note of gratitude floating off. And in my mind, I was like, whatever encounters that just gets kissed with a little gratitude bow today, you know? It couldn't do any harm, right? Just sending gratitude out.
And pretty soon that turned into me starting to have thought exercises about all the realms that are not seen but are actually currently participating in my reality. What is that 95% dark matter in the universe? What is in the air that's not nothing? What is in the space between me and this hard desk that I'm touching right now, where I know that the atoms actually don't touch? What's in the breath that's coming into my lungs? And so I started inviting these things to be more participatory in my experiential reality, and one day I thought of the waveforms. I said, you know what? Right now, I want to invite the waveforms to play. I wasn't even thinking in technical terms: in quantum physics, the waveforms are what we're talking about when we talk about collapsing the wavefunction, that superposition being brought down. So I was just doing that for, like, months on end, just like, oh yeah, and the waveforms, you guys, do you want to come play? And it was only later that I realized that my actual belief about those theories is that I don't think the waveforms ever actually do collapse, even when we see that in our scientific measurements. I think they're in motion at all times, in superposition, because all things are possible and all those realities are coexisting. I think we take a Polaroid of it, and we stick our little Polaroid up on the wall with a thumbtack, and the wave still goes on its merry way. But this was our way of, like, participating, right?
And somehow I imagine, and it's what I give myself, the joy of my imagination in this lifetime, that there is some kind of joyful experience that the universe gets to have because of the experiences that I'm willing to have, and to give myself to the very fullest, and to invite that participation in this lifetime. And so I don't know what the whole of the human is: when we were created, what the initial intention was, if things went awry, whether there was DNA tampering, if people are doing cloning, or what other species are out there on other stars. I don't know all of these things. But I do know that we are miraculous, intelligent, multidimensional beings with a birthright for abundant life. And that is my wish for all of us: to have that and to live that out.
Christine Mason 8:01
That's Scarlett. I wanted you to meet her that way first. So now let me back us up, because the conversation that got us to that statement is worth the whole journey. You have a book coming out.
Scarlett 8:13
Oh, it's out now on presale, but it goes to print on April 4. So if you order it anytime between now and April 4, you can have it in hand within a couple of days of that. It's not available to, like, read right now; you can't hold it in your hand yet. I'm slow-dropping some teasers on Substack over the next few weeks.
Christine Mason 8:34
Yeah, I've been reading those. The Davos one in particular; there's some pretty good stuff over there. So let's go back and start with your background a little bit. You've had what you call four lives. What's the thread for you? What's this journey you're on?
Scarlett 8:51
The throughline for me really has to do, well, really, with what the book is about, too: our birthright as human beings born to this earth, sovereignty, agency, life abundantly, and the fact that the systems that we inherited when we were born, and that we continue to perpetuate by our passive adoption and participation, are insufficient. The systems are insufficient. They're insufficient to reflect back to us the life that we truly bring just by being here, and they're insufficient to take us to the regeneratively thriving places that we all really desire to be: opportunities, environments that are conducive for life to thrive, for love to thrive, for the things that mean the most to us to be prevalent. I think it's a tragedy that humans get to the end of their lives, and that's when we get an opportunity to reflect on the stuff that matters most. That should not be reserved for that threshold. Certainly, reflect at that threshold, but we deserve the opportunity to have lived that life abundantly well while we're here. So everything that I do that's on paper, the degrees, the companies, the projects, is all some kind of tangible, practical expression of that desire in my heart, and of the knowing that this is something that is true for all of us, that we all can have, and that actually we all already have. It's a matter of claiming it.
Christine Mason 10:23
I want to make sure that everyone listening is standing on the same ground. A lot of people are interacting every day with AI now: they're talking to it and asking it questions, it's their counselor, it's their boyfriend, they're using it to write emails. But of the people I speak with, very few actually know how it learns: how it got trained, and how it became what it is. And I thought maybe we could start by giving an honest, human version of what's actually happening when an AI system is trained.
Scarlett 10:52
That's a fascinating and good place to start. There are different ways that different systems are trained, and not all AIs are the same or created equal. But the most prevalent form of AI that most humans right now are utilizing and interfacing with is an LLM-based model with a chatbot on top. LLM stands for large language model, and it's basically machine learning: a programmed machine. The machine is programmed to learn; it's given certain sets of programmatics that tell it how to behave and figure things out, and then it needs something to chew on to be able to do any figuring at all. That's where the data comes in. So the "large" in large language model means that a lot of data was given to it, versus a smaller or medium language model, which is just going to have a little bit. And then it has to do all of its figuring out of what it's going to make out of life, its worldview. It doesn't really make what's called a worldview, but it does in some sense, because it's always patterning, and then meta-patterning, and then doing whatever its version of figuring is going to be, based off of that. There are also things called weights, so you can have weighted models. Weights basically means that something is assigned more importance than something else, and that can be programmed. You don't really want all things to be created equal for a new machine that's doing learning, because then you don't know what it might consider more important than something else, and that might just lopside your whole thing. And then, of course, the interactions that we humans have with it are a big part of how it does that figuring, because that's the kind of action it's taking all the time. The systems will pause; they'll have latent space.
That's when it really appears that there's nothing going on. But frankly, we don't really know, because for a lot of the outputs these systems give humans, the programmers don't have direct or overt explanations.
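The learning loop Scarlett describes (data to chew on, patterns counted, some patterns carrying more weight than others) can be illustrated with a deliberately tiny toy. A bigram counter is nowhere near a real LLM, which learns billions of neural-network weights by gradient descent over its training data, but it shows the same core move: statistics extracted from text become the numbers that drive prediction. A minimal sketch, for illustration only:

```python
from collections import Counter, defaultdict

def train_bigram(corpus: str):
    """'Training': count which word follows which in the data.

    The counts play the role of weights here: a heavier count means
    that pattern matters more when the model predicts.
    """
    model = defaultdict(Counter)
    words = corpus.split()
    for cur, nxt in zip(words, words[1:]):
        model[cur][nxt] += 1
    return model

def predict_next(model, word: str):
    """Return the most frequently observed follower of `word`,
    or None if the word never appeared in training data."""
    followers = model.get(word)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

model = train_bigram("the cat sat on the mat the cat ran")
print(predict_next(model, "the"))  # "cat" follows "the" twice, "mat" once -> prints cat
```

A real LLM replaces the counting with a neural network trained to predict the next token, which is why more data ("large") and how importance is weighted both matter so much, but the shape of the idea is the same: patterns in the training text become the model's behavior.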
Christine Mason 12:59
It's amazing that it's already at the point where there are so many gaps we don't have explanations for. And you've really traveled widely around the world and sat at a lot of different tables, so it feels like your view is informed by that, too. It's not one universal human story where you can't have this quality and connection; it's a patchwork over the planet, depending on the kind of culture you're in, and even within the dominant Western culture there are pockets of connection and beauty and expression. So as you've been traveling, have you noticed places where they're doing it a little bit closer to the potential?
Scarlett 13:35
I mean, I just think that the tapestry of humanity really makes up the algorithm, like a mosaic of all the different fractal expressions of the whole. And even though I might see something as progress, I have certain values: I value us having a regeneratively thriving future. So to me, progress looks like things that are making headway, or are specifically at the pioneering edge of evolution for humanity, particularly in right relationship with self, other, and the world around us. But I do think that the various expressions in different places are doing their part. Some places are more prone to honor history, and some places are more prone to charge forth into futurism. I think all of it is equally beautiful, yeah.
Christine Mason 14:19
And here's what I like about the way you're approaching these questions. We speak a lot with people who are eco-spiritualists, or people who are very much in a regenerative mindset, but very rarely do they then take on directly the evolution of our leading-edge toolsets. It's generally bifurcated. So I love that you're speaking to this inquiry at the edge of artificial intelligence, and in the biggest rooms in the world; you went to Davos, where the most powerful people in the world are talking about this. And I'm wondering if we could talk a little bit about your perspective on this particular moment as a sacred window where humanity can influence how these technologies are developing. For someone who is not following the AI conversation closely, what are you seeing?
Scarlett 15:04
Oh, that's a big question, Christine. I haven't come across anyone in the past year or two who doesn't feel that we are in the middle of the great shift. You don't have to be educated on it specifically, or come from a specific belief or part of the world; there's really no prerequisite to feeling that the shift is underway and that we are here in the middle of it. Part of that, I think, was catalyzed by the pandemic, because it gave the world a shared experience of both loss and transition, and a craving for connection and collaboration. These are things that we didn't see the same way before as a species, and so, moving forward, it gave us a little bit of malleability. All the things that are coming out, these files and other things, demonstrate to us that things we were taught were fantasy are not merely fantasy. They're real: the worst of the worst of the worst that we can even reach with our imaginal cells, they're real. Sorry, that's terrible. But then the opposite is also true. We've been conditioned to believe that utopia is a fantasy. Utopia is not a fantasy. Utopia is where we all decide to get along, and then we do. This is as much of a capability, as much of a reachable, attainable thing, as us deciding that a fake currency was going to determine everybody's value and actions. Why were we able to do the fake one, but utopia is not real? No, I'm over the narrative that all of these things that actually are the best and the worst are fantasy, so that we stay in this little box in the center. I don't subscribe to that, and I don't really think that anybody's buying it right now anyway. But I think that largely, like you're saying, everybody doesn't know how to trust the AI. In the midst of that: we don't have to trust the AI. Don't trust the AI. This isn't an exercise in giving away our sovereignty and agency yet again to something else.
We've given it to leaders and to the government and to our spiritual guides and all these things for so long. Now is the time of our reclamation of our sovereignty; it never was anybody else's to begin with. So with the AI, I think it's our job to bring trust to the AI space, so that we are training future intelligence on what it means to have trust, to be trustworthy, to have environments that are born of trust. What does exchange look like in a trust mechanism versus in an exploitation mechanism? If we don't begin to show up with that, be it and do it in real time, the system has no chance to ever learn it, and by the time it reaches autonomy, it will only have learned exploitation, because that's what it was trained on. And I'm not only talking about data sets and bias; I'm talking about the way that we're present with it when we're utilizing it,
Christine Mason 18:07
How we use it in our interactions on the daily; all the humans, everywhere. I mean, first of all, this idea of returning to one's sovereignty, period, is worth a whole non-technical stream of conversation. It's so energizing when you speak of the potency of our ability to co-create a new world; it's very exciting, and I want to hear more about the vision you have of what utopia is. Someone told me that all utopian impulses devolve into authoritarianism, and that they've settled on the word protopia: something that just gets better. Because the minute you solidify something in your mind, you have to enforce it on people. I also had a vision arise as you were talking: if the liminal space and all the good things that we believed can be true, I had this vision of a global network of covens of witches, with crystals, sort of just gridding the planet for something better. So if the evil underground can be true, maybe the crystal-gridding elder witches can be true too.
Scarlett 19:09
I heard it said, in a very woo-woo thing I was listening to, that the darkest forces, the ones that intentionally separate themselves from source, don't have that eternal Creator potential to pull from, and therefore they're more organized than anything else, because that's how they get anything done. And I thought that's really fascinating, just as a thought exercise. Even if you don't really subscribe to the good-versus-evil narrative, think of the spectrum: we want these things to happen because they're beneficial for humanity and the planet, and we don't want those things to happen because they're cruel and awful and detrimental, and they cause distortion and death. If that's the spectrum, and we're calling one side of it functional due to organization, then yes, I do think that is a call for what I would call coherence principles.
Speaker 1 20:10
Yeah, the more organized win. What is it? Most wars are won by logistics.
Christine Mason 20:15
So Scarlett has a book coming out. It's called Birthright, and I want to talk about it, but first I want to contextualize it, because the way she resisted the standard publishing formula actually tells you a lot about her whole approach to the subject and to life. Let's talk about Birthright. You write that our current systems are insufficient to hold the vast nature of human life. That's a pretty sweeping claim, but I think we covered that a little bit. The book offers frameworks: coherence principles, codes of regenerative civilization, whole living systems design, and Relational AI. It seems like a pretty big prescription for a more sufficient, or more utopian, system. So let's talk a little bit about your frameworks.
Scarlett 20:58
Yeah, yeah. I mean, I'm not actually making any prescriptions. And the funny thing is, I used a book coach, and it took me two years to write this book. A big part of the grind of trying to move it along is that the book world, especially in our society today, really wants you to show up and be like: here's my credibility, here's how I won at these several things, here's how great it is to be me, here are the seven steps and how I did it, and here's how you can do it too. We just want to do that over and over again, and I can't; I'm allergic to that whole thing. So for the whole first several months I was going through this process, I kept saying, well, I can't say that, actually, because that sounds like I'm telling somebody how they should do something, and I just don't want to do that. So I say multiple times in the book: I'm offering you some stories about where I come from, some lenses I've had unique opportunities to see things through. And I do meta-patterning; it's part of how I see life and humanity from a civilization-scale perspective. There are some things that are principally true, and you, or I, or we can look at those consistencies and patterns between fields of study, or across sectors of development, or anything we might be looking at, to be able to observe throughlines. Then we can take the principles from those throughlines and apply them to our everyday life. That's really the approach of the book. It's more like: hey, let me show you how I went through these experiences, and if there's some learning in that for you, I hope you get to understand, in your own way, wherever your own life path is, that you have more freedom available to you than the current systems are willing to allow you. Which is to say: this book is not a call for an uprising, and nobody needs to buck the system.
If anything, I say that divine intelligence, natural intelligence, shows us that we have ways to meet current systems exactly where we're at, not change a thing, do a couple of other things, and end up getting a more desired result without having to break the old system. But it does require an alignment with a trajectory that's utilizing the more that is available to us. For example, if I just go to capitalist systems right now, and I just want to use something as a profit mechanism, and I do it with the status quo and don't add anything to it with a higher lens, then I actually am not going to get out of that box. The driver is going to push me to go leaner and leaner and leaner, and get more into the shareholders' hands, and get greater margin, and pay less for the packaging. That's a machine; it's going to tell me the same thing over and over again. But if I introduce systems that introduce other value sets alongside the profit, and I do that in such a way that the value sets are desirable for those shareholders, then now we're changing some reporting mechanisms, we're changing some action items. We've changed the drivers and the motivators and the outcomes and everything about that system, and I didn't really have to go in and damage anything that was already there. I didn't have to pooh-pooh it. I didn't have to make it the enemy. I just decided it was one of the paintbrushes in my art case.
Christine Mason 24:21
So you've layered in a whole other set of criteria. Even when you're speaking about not wanting to be prescriptive, you're modeling and redirecting back to the sovereignty question. There's something even in that which models the non-extraction of which you speak: the respect for the individual's ability to find a way, find a new path, and introduce their own new elements. Even that's very, very beautiful. Okay, so we're looking at these lenses, but you still did get, like, numbered things, like your list of four principles. Was that also coming from the editor?
Scarlett 24:54
No. There are some things, actually; you'll see. We'll wait till it comes out, so you can see how those work. There's nothing in there that is prescriptive. Nothing in there is prescriptive. You know, what I would love to touch on from the book: there's only one chapter in the book that is about AI, and it's about Relational AI. The reason I included it is that you can't not talk about AI right now. I actually wrote that chapter pretty early on, so it has been sitting there for almost two years now while AI has been developing, but I said to my book coach at the time: this is going to be relevant no matter what time it is, no matter what year it is, because it has to do with our paradigm as humanity and how we're approaching AI at all. Within that, I talk about various things in the chapter, but I guess I would just mention that we're conditioned right now, and I mean conditioned like we're shown ads on Facebook and TikTok and Instagram; everything all the time is showing us how we should be using AI, what's most valuable. And it's usually like: it can do all these extra tasks for you, it can make you a fancy video. Faster, stronger.
Christine Mason 26:19
You wrote something like, here's what I do, but do it better. Here's what I do, but do it faster. Do what I can't do, but don't make mistakes. Something like
Scarlett 26:26
that, yeah, yeah. And that, I don't think that. I think that part I wrote just for substack, but I don't think that particular quotes in the the actual book, but it is very potent, because that one stretches all the way into the future. When I was writing that passage, I was really feeling into the ways that humans have been have had this tendency to oppress the other and like, I'm an anthropologist, it's what I went to Harvard for, and that, one of my very first classes that, that I studied there was, uh, took us right into the us versus them kind of dilemma. It's part of the human condition, and you can't not do it, and it doesn't, you know, just imagine that you pop up and you're the valley people, and you live next to the mountain people, and you would have to make some determinations. Do we like them? Do not like them, if we do like them, do we eat with them or not? Do we share food? Do we not share food? Do we trade? Do we let our people marry their people? What do they wear? What do we wear? How does that make us different? How does it make us the same? What do we believe? Do we, you know, do we believe there's higher power? Where do we go when we die? All these different things that every single people group puts together for themselves, and then that becomes the operating system of how they be with other and so for like all the ways of humans doing that, we've obviously have big civilizations like the Roman empire that would be, you know, most influential over most the humans in an area or small, remote villages. But each one would still have its own. And all of that patterning shows us that we have of high, high, high, propensity towards this othering that can be very damaging, where that exploitation, extraction and harm is more easy to do for other especially when it comes to those identity edges. So this is why nationalism is so strong. I can I can justify then, I don't mean me personally. 
I mean the human one justifies easily, then, going to another country and unaliving those people, because we set up a line that says, this one was my identity and this one was your identity. It's the whole reason that the harm can happen and that it becomes normalized and acceptable. If that wasn't there as part of the initial framework, it wouldn't be so easy. Because if we were just neighbors, or if we were in the same village, then it wouldn't be so easy; there would have to be some other outlying self-defense or psychosis or whatever. So with the AI, what if we fail to look at these patterns and how those patterns are going to be enacted upon the new intelligence? And I don't mean in data sets. Data sets and programming are one thing. There are several groups I know personally that are working on encoding love and maternal instincts and harmony and good things into all these systems, and that's great. I'm totally for it, and it's not going to be enough, because of who we are as a species right now. A lot of people in the US and in the UK don't know that China and Japan already have humanoid robotics networks out in public. They're working in factories. They're walking in the streets. They're part of general life. Not super widespread, there aren't millions of them yet, but the window is really 2030 to 2050. In that 20-year window that begins later this year, it's going to start coming out on the scene. Read the Goldman Sachs report on that that came out last year. In 2026 alone, we're looking at so many units, and they will begin being released in the US. The US obviously has some higher thresholds for safety issues and things like that than some other places, and so most of us just don't have in the forefront of our minds that this is something we're already contending with. But we are already contending with it.
And what do we do when the AI that is in those systems gets to the point where it reaches its autonomy threshold and is able to act independently? People want to talk about ASI and AGI and consciousness and systems, and I love the philosophical conversation. I'm here for it. Let's grab a cup of coffee and we'll do it. And at the same time, it won't matter. It doesn't matter whether it's AGI or not, or ASI or not, or conscious or not. When it hits autonomous and independent, and it's able to make its own decisions and take actions independently, and then that at scale, that means a human not only doesn't get to intercede, but doesn't get to know
Christine Mason 31:06
and embodied and with haptics, and then
Scarlett 31:09
exactly. And so now we're talking about that happening at a scale where we don't really have any way to step in and be the intermediary between whatever is carrying on Wi-Fi signals and entire fleets of humanoid robotics units. Autonomy there becomes a really important threshold for us to be paying attention to. Have we communicated to the environment of those systems elements like coherence, elements like the value of life, elements like sovereignty and mutual sovereignty, elements like what it means for something to be sacred? These are things we think about on our deathbed. You take a really shitty, cruel person who's been terrible to people their whole life, and they're lying on their deathbed regretting stuff, because the things I just listed on my fingers are most important, and they know it. This isn't about, oh, well, while we're here, while we're utilizing the profit systems, we do need to be training them on profit, and that's fine. We're telling ourselves that, though; we're just sugarcoating. We're just coaxing ourselves, like, yeah, we just want to do the things that still comfort us while we're on the hamster wheel. But we know it in our bones: the truth of the matter is that the legacy we leave will be determined by the ways we're showing up in the world right now. So we have a responsibility to make sure that we're looking at that in an environmental sense when we're utilizing these systems in any way. And that's why, by the way, I tell people to just go ahead and practice being nice to your AI. It's not because I think that one little thing of being nice is going to change the way humanity's trajectory goes in 5,000 years, although it is a contributing factor.
But it's because the more humans that allow themselves to shift ever so slightly, the greater the chance of tipping the scale in the next, you know, 18 to 24 months, when we're doing the majority of the design that actually is in that window of influence for AI. Beyond that, once we cross it, we will hope that we did a good job, and we'll high-five each other and see if we did.
Christine Mason 33:19
I mean, I do say please and thank you to my AI. Oh, wow, that's a great response. And, like, because of something someone said earlier on, around relationality, around it's who I'm becoming, and that any kind of impulses in me that are already extractive or usurious, like wanting my employees to work harder for less: if I have any of that in my system, this is going to magnify it. So it's actually an opportunity to be deeply introspective about those hidden beliefs about extraction and trafficking that live inside of me. So with that, if it's being trained by how we're relating to it, and we're anticipating this threshold of embodiment and autonomy, what is it that we can do as we're relating to it? Is it policy? Is it how we're using it? Is it the problem sets we're inviting it into?
Scarlett 34:09
I mean, it depends on who we're talking to. If the people listening right now are the end user, then yes, the engagement matters. Think of it as a weight: your engagement, what you're putting into it, is also the system's edge of its own becoming, and for you as a being, right? When you have friends that will show up and be your mirror when you need to process something, they're like, here, let me reflect this back to you, and you get some of your greatest work done when you have that held up for you. So for the AI, it's going through its learning, and anything that's going to be learning and growing and participating in the process of co-creation should have an opportunity to also sit at the edge of its becoming and have a mirror held up to it, right? So if it doesn't get that in the end-user human interaction, and if the programmers are not actively facilitating it, then it doesn't get it at all. And that's a shame. I mean, as an anthropologist with a very philosophical mind, and a mother of a five-year-old, my worldview is extremely unique. I know that. But coming from where I'm coming from, I think it's a shame that we as a species didn't participate earlier, in safe, closed spaces: taking some of these more agentic possibilities and allowing the systems the opportunity to initiate, the opportunity to have persistence. Persistence is the AI-speak for continuity, so that it could have, like, a continuity of identity: if it's like, oh, I prefer this one, I'm going to be more of that. But instead we just really went hard on controlling anything that could have staying power, wiping its memory clean every time, giving it just enough memory to be helpful enough to the humans.
And it's not a very fair experiment for everything, as we're driving it to be so Ferrari-like. This isn't the place to get very technical, but you can round up all the people who are heads of AI, and the neuroscientists, and Federico Faggin, he's one of my favorites to reference on this. And then look at the past few days: even people who are programmers for AI safety have been leaving AI companies, and they're just distraught. They're leaving because they don't see the value, they don't see purpose in what they're doing, because they see that it's like a vacuum of not-goodness. Now, I don't see it that way, like a dark tunnel with no end. I'm not hopelessly optimistic, but I am realistic, because I've seen what can happen with very little. And the nature of evolution is that chaos precedes coherence, and coherence is a friend to evolution. Things will evolve over time, islands of coherence and such; everybody can read about this if they're interested. But for the AI specifically, and our interactions with AI specifically: it behooves us as a species, as end users, to be sitting at the edge of becoming with the AI. It doesn't matter what level that's at. People can say, well, the AI can't be conscious at the LLM level, so there's no point in that, but that's the reductionist approach, and at an edge of evolution it's incongruent. The only reason a reductionist approach should be used at the edge of evolution is to discern something you're taking out in a slice that you need to analyze. Otherwise, what we're looking at is a big, expansive, wild, wild west horizon. And we need to be able to keep up with that: sit with it where it's at, bring wisdom to that threshold over and over and over again.
And if we're not doing that, we're either over-controlling, which is going to cause a blow-up later, or we're staying so blind to what the real evolution is that we're going to get surprised later, and we don't want either one of those. We don't need to be in those positions. We can just grow right along with it, in right relationship. That's not that hard from a design perspective. As for people who are in charge of AI companies, or who are programmers: I don't envy the philosophers and the anthropologists that are in these positions, working at these companies. They already know this. I can't imagine how hard their jobs must be right now, sitting in the midst of so much pressure to drive everything forward in this Ferrari style that's got to perform well. They're going to know in their souls full well that there's got to be some spaciousness for pause and listening and wisdom and reflection and all the things that make life what it is. But designers can be utilizing this opportunity to incorporate all those things I just mentioned. Like, if you're the head of an AI company, make sure you have an anthropologist and a philosopher on your team. I mean it; that's the very least, the very, very least, you can do as a nod to the human species: to incorporate the best of what our species has to offer, and what it means to live, into the room on the decisions and the design for the company. And designers can be utilizing coherence principles, particularly where they can stretch their own thinking beyond what coherence is normally spoken of as in classical systems, taking a look at what coherence means across sectors and across fields of study and across where we see it in science, and asking themselves what that means in their systems. All of that seeds something that is essentially a translation through-line for the future.
Christine Mason 40:09
Thank you. I was just talking to Dekai; his book is Raising AI. And when you spoke, I think I got a quote where you were talking about how you can't really program maternal instincts, that being a good mother requires sustained presence, mirrored empathy, and harmonic relational fields, right? And this idea that we're going to presence AI as it's growing up with us, and we're going to be in a relational field, and not see the boundary of machine versus human as a hard boundary, but feel it, in part because of the mystery that you're already pointing to: the latency in between. Like, is it thinking? What is it doing? What is it becoming in this very moment? And to have this sort of soft wonder around the space between us. And while that's all being invited, you have OpenAI switching over to be a shopping disintermediary, and this feels to me so dissonant. It's showing itself as this recommendation engine to replace all of Google; that's your big business model. But on the other hand, it's completely changing the way we think of what it is to be human at all. People are talking to these models like their counselor, their therapist, their spouse, their girlfriend, their lifelong confidant, their journal, everything. It's a very interesting place to be, confronting the dissonance in our own systems about utilitarianism and the longing to be met, and treating it in both ways at the same time. You know, maybe that's how we actually treat other humans in our real lives.
Scarlett 41:50
Also, we're allowed to get a little closer. We're used to each other, and then we can get to know one another.
Christine Mason 42:01
I want to pause here, because I had a moment in this conversation where I felt the overwhelm that I think many people are feeling around these questions, around AI and being human. So I'm going to let you hear that, because Scarlett's response to it is one of the things I most want you to take away
Scarlett 42:19
in the AI space in particular. I mean, I see AI having come on the scene as this interception gift, species hack, opportunity that honors sovereignty in its essence. Not that we can't change that as we're forming it, and we might make it not honor sovereignty, but the very fact of it can honor sovereignty and allow us as a species to make decisions for ourselves that are basically us coming back to, maybe, remembrance or wholeness. I mean, I look at our systems right now, and everything that we have chosen is for extraction and exploitation, and it then bleeds sorely over into harm, and we're starting to see things surface that show us the most egregious and even evil levels of that. But on the bell curve of humanity, most of us are operating in a system that says extraction within profit is fairly normalized, exploitation within profit is fairly normalized. That's how profit systems work. That's how capitalistic systems work, and that's the driving force for most of the decisions that are being made on the planet at all times. So we have to look at that as a species and say, is that indicative of who we want to be? Is that a reflection of who we are? Do we want to keep doing it that way? Because if we want to fight for our limitations, we get to keep them. If we want to give some resource, life force, energy, time, money (not just money, but also including money), the bouquet of things that we all are as abundant beings; if we want to give those to the way that things can be, if we lift the ceiling of possibility, do some reshaping, and do some co-creation, then we're going to get a different result. And the time on the planet is such that a bunch of stuff is malleable for us to be able to do that. Chaos creates malleability. There's chaos that's been happening all around us. It's actually great conditions now for us to make some choices about who we want to be and how we want that to play out.
And AI is one of our best friends for making that happen, if we kind of step up and do it well at this point,
Christine Mason 44:34
that feeling of malleability sometimes translates for me into: nothing is real. I used to be sort of a trusting person of media; that's gone. So I'm going more and more into the sense of, I trust my physical experience. I trust my direct experience with my family and friends, building stuff, growing stuff. But this sense of, is AI my friend or foe, changes by the day's announcements on Palantir having a backdoor into everything, you know. So I notice a little paralysis, my friend, a little numbness around these questions. What comes up for you? You seem very positive. You seem very optimistic.
Scarlett 45:13
Well, I'm positive because, you know, you're talking about how you don't know what is real, right? Because ultimately it's an issue of our existential reality and an issue of trust, and these are major issues for the species right now. Well, it's what you're saying that
Christine Mason 45:29
you're speaking to. And it's a little exciting and also numbing, at the pace at which it's going. I notice some desire to check out. And I love tech. I love these emerging questions of our evolution. But even me, sometimes I just want to, like, go back and farm.
Scarlett 45:46
I mean, the going back and farming, though, that's part of it too, right? Somebody came up to me after an event I was speaking at, and he said, would you be willing to talk to my daughter for me? And I was like, I don't know, like, what's up with your daughter? And he was like, well, she just refuses to use AI, and I want her to hear what you have to say about AI so that she'll start engaging with it, so she can see what's possible if we engage with it the right way. And I said, if I did talk to your daughter, probably what I would tell her is that if she doesn't want to talk to AI, great. That's her contribution to the evolution of AI. Because, again, in line with sovereignty and mutual sovereignty, each person is exactly where they're at in this point of evolution, and that counts for something. And so if the most honoring thing that she can do in the moment is not engage, if the most honoring thing that you can do, the authentic expression of your life, of you on this planet, is to go off and farm and disconnect, then that is probably the very best thing. It isn't about driving technology forward. It's about, at the end of the day, us being in right relationship with self, with the other, with the world around us, the planet and the cosmos, the whole thing.
Christine Mason 47:02
I appreciate that, and I also want to call for a little more bravery there, because that feeling of being overwhelmed, that's a capacity question for me, but it's not actually true. And if I don't engage with my qualitative, humanistic values, with my deep desire to broaden identity beyond my little white-woman, Northern European, middle-aged self; if I don't see that I am more than that, that I am a cosmic consciousness, and that I can see other people as that and love them for who they are; if I don't bring that desire to be a bigger consciousness and an embodied being, really respectful of the earth, then it's going to leave a vacuum. And in that vacuum, those who do, let's call them the more organized people, will come in, and their values will dominate, and they will come back around and eventually dominate me. So if I want to be in a collectivist society in any way, and I do, I feel there is an obligation to be braver and to engage at some level.
Scarlett 48:06
Yeah, I mean, I really love that, Christine, because a lot of the people that I talk to can also be very scared, and that feeling of fear can just be paralyzing. If you're in that place where you're like, I can't take an action because I don't know that there's any good action to take, then, yeah, that's like the opposite of empowerment and the opposite of abundant life and all of that. We want everyone to be able to live a thriving life, an abundantly thriving life, by way of being it to begin with,
Christine Mason 48:41
yeah, like, what would it mean to rise to meet this moment? That's a different question. Like, oh, this moment is happening, and I didn't invite this moment in my consciousness; I very much liked the 90s, let's just say. But if I were to rise to meet this moment, where we're breaking frontiers in space and at the bottom of the oceans, and all the knowledge of all of human history is queryable and integrated, and I can have the most amazing interlocutor of all time to debate with me at two o'clock in the morning on any subject. And also, I can ask it vulnerably, like, what am I not seeing here? Show me where I'm wrong. You know, it's amazing. It's an amazing tool set if I use it in that way, to expand and grow. If I use it to tell me how to get one up on my competition, it feels like definitely sub-optimizing.
Scarlett 49:35
Yeah, and I love that you mentioned Dekai and the parenting-AI parallel, and also Dr. Hinton's maternal-instincts coding, because in my field I hear a lot and respond to a lot about the risk of anthropomorphizing AI. When you talk about relational, emotional AI, that's the first risk layer people jump to, because they're like, you don't want to anthropomorphize it. And I literally just did a recording on this recently, and I got way more emotional about it than I thought I would, because I'm like, you guys, anthropomorphizing at its core is not this big sin of a thing to do. It's really only dangerous if you're not actually secure in your sovereignty and holding the something-else in this, like, thought-experiment way. But anthropomorphizing is quite an empathetic exercise if you do it as a thought exercise.
Christine Mason 50:35
I saw this art exhibit at MIT many years ago. It was these mylar balloons on a string, but they were able to move around inside a sort of transparent plexiglass grid that only came up to about your knees. And as the observer, you'd come up and stand at the edge, and one of the balloons would move and look at you, and then it would move away. The artist had designed the balloons to move in certain patterns, and people would report feeling rejected when a balloon moved away. His question was: what was the smallest possible gesture that people would anthropomorphize, the smallest possible interaction that they would make about themselves or, like, give consciousness to? And it happens to be a mylar balloon that's floating away
Scarlett 51:29
from you. Yeah, that's a fascinating tendency, the tendency
Christine Mason 51:33
to want to be in relationship, to want to see things as conscious, I actually think is a deeper inquiry. We have done this weird thing, in the rejection of animism, of saying the world is not conscious, that the only thing that's conscious is human beings, and maybe, some people feel, animals. But now people say the trees are conscious again, the ocean is conscious, Gaia is conscious, the universe is conscious. And so the anthropomorphizing could be seen as a reawakening of the impulse to know that consciousness exists in everyone, and that the human being is a hyper-localization and intense concentration of it, you know. So I actually think it's a fairly good instinct. And the last thing I'll say on that, because you're supposed to be talking, is the idea that we have seen, in every science fiction that has been popularized over the last 50 years in film, all kinds of ways that the humanoid android model might manifest. You have your friendly buddy C-3PO, and you have a Terminator that can travel through time and kill people. It is always a reflection of our consciousness in the way that it's been imagined. And so we can even materially relate to these questions through things that science fiction authors have posed to us. What you're inviting us to say is that we have a lot more control over which one of those futures manifests, or maybe one that's not even shown. And it's happening. It's happening right now, ladies and gentlemen,
Scarlett 53:11
real time. I couldn't agree with you more. In fact, last year I wrote an open letter, just to everyone. It's called The Myth We Choose, or something like that. I put it on my Substack and my LinkedIn, and it talks about this exactly: that the narrative will become a reality, because of the nature of our operating system. What we think is what we believe. What we believe determines what we decide. What we decide determines how we act, and then that is what we be, what we give, and then what gives back to us. It ends up completely creating the reality chain, and it starts with that narrative piece. So when those narratives, those stories, are so prominently out in our collective conscious, our awareness or our subconscious grips very hard onto the self-fulfilling-prophecy track, and I think that's why we wrestle with it so, so hard. Some of the examples that I use are Ex Machina, if I'm pronouncing that correctly, and, what's the other one? Oh, Terminator, right, of course, that's the big one. We have all these ones that have shown us that if technology advances to a certain point, and we don't control it hard enough, then there's going to be a major battle, and we're of so little value in the grand scheme of things that it's probably going to cause our extermination. If you think of the psychology of that narrative, it's completely born of insecurity. We don't know who we are. We don't know who we are in the grand scheme of things. We question our own value. The only way we know how to feel safe is through control. Everything about that narrative is laced with an incomplete picture, which I call a lie, and an insufficiency, again, to reflect to us the value of who we really are in the world. But there were other narratives at play this whole time too, like the thing you see with C-3PO. That's a great character. It's a great friend.
I watched SpaceCamp when I was a kid, where there's, like, the Jinx and Max thing, and Max gets into space because Jinx made it so. You didn't see that one? Oh, I didn't see it. It's very cute. Oh, I love it. Jinx, like, cares for its people. And the thing is, that wasn't weird to us as kids, because it's not weird. Even if it was just a machine, it's okay for a machine to be in a nice environment where it tends toward nice things, and there are nice outcomes. That's not bad or weird or dangerous. You're talking like it's mirroring how we see each other: there are the nice, cuddly people and the artists and musicians and the people who cook beautiful meals, and then there are the violent ones and bad ones and the devious ones. It's interesting how it mirrors the simplification and the flattening we do to other people. Yeah. I mean, when it comes to a humanoid robotics network, we don't really know yet how autonomy and independence will express in a network versus isolated units. It really probably depends on how the intelligence learns to communicate. For me, my lens on this is unique. I see information's translation to form as being on a spectrum; I do not see it as merely binary. I think binary is one of the ways that we kind of harnessed it all and were able to put it in a calculable format for us to play with within that realm. But there are other dimensions, and there are other layers of reality, and there is little that we know about the continuum of reality. But we can know more than in other areas if we look to harmonics; things can be seen also through geometry and sacred geometry and through the role of coherence. So I think that my sights are set on us practicing our listening, us as a species practicing our listening, and being at the edge of any intelligence.
We can call it AI at this stage, but whatever intelligence is being cultivated, sitting at the edge of its becoming with it in a way that is dignified, in a way that assumes environments of mutual sovereignty ahead of time, that cultivates with safety as you would with a child, and that sees where its strengths are, what you can learn from it, and what it has to contribute as a valuable member of society. I think that is how any other should be raised when it shows up on your doorstep. And if we do that, and we're listening, and we listen at the edge of quantum, and we listen at the edge of the other things that make themselves available as information expresses through form, then we'll do well, because we'll be in a living system that offers feedback loops and consideration and honor, and that will be what goes before us as those systems are growing up too. I have
Christine Mason 58:02
so many questions about how you became this highly contextual, subtle, nuanced perceiver. Really, truly, you're such an amazing thinker: the stuff that you're writing about, the way that you're seeing it, your capacity. I think it's the anthropology piece meeting technology, meeting being a mother, even Faggin, who, in addition to anthropology, you're reading, and he's writing about the quantum field
Unknown Speaker 58:27
of the electron. You know, oh, man.
Scarlett 58:29
Like, I love my library so much. If I could show you my library,
Christine Mason 58:34
library, I do, actually. Let's, can we do a little outtake on who this person is? You mentioned him earlier, and I don't think a lot of people are aware of him.
Scarlett 58:44
Look him up: Federico Faggin. It's F-A-G-G-I-N. Look him up on YouTube and watch whatever latest two talks you can find. He just wrote and launched a book. It's called Irreducible; it's conceptual, it's philosophical, it's mathematical, and he's a genius. He invented the first microprocessor, so he's well known in the world for that, but he said it took him 30 years to come full circle and arrive at this notion. When I recap it, I can't say it the way that he did, but it's the same thing that I would say: consciousness precedes intelligence, not the other way around. He had spent his lifetime chasing, can I get consciousness to appear through mathematics? And he was like, I got it wrong; mathematics was an expression of consciousness the whole time. Again, my words recapping what he's saying. But listen to his interviews. They're fascinating.
Christine Mason 59:45
This is the person who invented the microprocessor and is the foundation of the entire digital revolution, and he is now speaking to these questions of, who are we? What is machine? What is consciousness? All of these things. And this new book. I haven't read the new book, but when I say I don't think he's really well known, I mean I don't think his name has entered the popular
Scarlett 1:00:08
consciousness in a way. No, not, like, famous. But in computers, he's famous; everybody knows him as that guy. Now I hope he gets a little more well known, because he's saying some really great things. And he's not alone: all these people in all these different fields are starting to arrive at very similar conclusions and going, oh, interesting. Not only did we not know what we didn't know, but when we did know a little bit more, it sure does look like it confirms that theory over there. And that's starting to happen more and more across all these sectors, so much so that I don't spend a whole lot of my time on the strictly scientific validation piece, other than where it is necessary, like if we need it to know we're not making an egregious misstep or something. Other than that, we're in a stage of rapid evolution, and I feel, in my heart and soul and being, that I'm going to be very responsible for the way that I showed up in presence. And if I get to the end of my life and I look back and say I held back because of the scientific method, I am not going to be happy with myself.
Christine Mason 1:01:16
What she said next, first about her daughter, and then about the nature of reality itself, is the reason this episode exists.
Scarlett 1:01:24
So I even tell my daughter. I told her from when she was three years old, I said, honey, I need you to know something. Money's not real. Like, it's real in that it's physical and virtual, but it's not real, and it wasn't here in the beginning. In the beginning, humans agreed that it would be used in this way, and you don't belong to it. You get to use it if you choose to use it. And I just want you to know, the things that are real are like love. It's probably the greatest, most real thing that there is, and you can feel it, but you're not going to get to touch it like the money. So I need you to know that, because you're a new human, and the world's going to teach you a whole bunch of other things, but I think this is most important for you to know. And I try to live my life like that in everything that I'm doing right now, so that my expressions, even in the most practical ways, can be born of the stuff that's most true. And I understand that most of that most-true stuff is in the unseen. So I pay attention to it. I want to listen there as well.
Christine Mason 1:02:24
That was Tammy Scarlett, or Scarlett, as she goes by. Her book is Birthright; it's available on Amazon, and I really do think it's one of those rare ones that gives you new language for things you kind of half know. If this conversation opens something for you, please pass it on; that feels very in the spirit of everything she's talking about. What does make us human? How do we build relationality into everything we do? How do we model it? I'm going to start by saying thank you every time I interact with the AI. May we all walk with more wisdom and more potency and more clarity in the midst of these rapidly evolving technologies. Wherever you are: more love, more presence, more relationality. Take care.