All right, we are ready to kick off our second panel. Welcome back, everyone. Before we get started, I would like to thank our host, the Air and Space Forces Association, for graciously allowing us to have our event here today. I hope you enjoyed the break, but I wanted to make sure we gave homage to our host. Now we're ready for our second panel, "Live Long and Analyze: AI Breakthroughs for Intelligence-Based Domain Awareness, Advanced Mission Management, and Space Control." Our moderator will be Mr. Jim Shell, who owns Novarum Tech and is an expert in space domain awareness, space situational awareness, and orbital debris. Jim, the stage is yours.

All right, thank you. I guess the mic is good; got a thumbs up on the comms. What Booz Allen doesn't know is that I am a bit of an AI skeptic. Yes, I know a lot about space domain awareness, but the overlay of that data-rich environment and how AI applies to it has me scratching my head a bit, so we're going to explore that today, and I'm honored to have these panelists. We'll go from this end coming down. Dr. Pat Biltgen is vice president of Space Mission Engineering at Booz Allen. He has a background in aerospace engineering, complex systems design, activity-based intelligence, and AI, and last year he published his second book, AI for Defense and Intelligence. Nate Hammett is the CEO of Quindar, where they are revolutionizing satellite operations with intelligent, automated software designed to unify hybrid and proliferated fleets. Prior to that he was the lead software engineer for OneWeb C2, where he helped architect the ground system that controls that mega constellation today. He also worked at Lockheed Martin as a certified test conductor in assembly, test, and launch operations for the MUOS constellation. He holds a master's and a bachelor's in space engineering from the University of Michigan. Thank you, Nate. And last but not least, Brian Flewelling is the director of strategic program development at ExoAnalytic Solutions. ExoAnalytic is a private US company that tracks the position and behavior of satellites using the oldest and largest commercial network of privately funded and maintained optical telescopes, providing real-time space situational awareness data products and services to government and commercial customers.

OK, gentlemen, help me with my skepticism. Let's start this off. SDA lends itself to this very well, right? A very data-rich environment. But here's my question: where does AI start and stop? Where does just employing good physics do the job, versus machine learning, versus AI, versus automation? Could you help me understand some of the lines between these different areas? I'll open it up. Who wants to go?

Well, Jim, I'll start. First of all, I want to thank you and the other panelists, Brian and Nate, for being here today. You made a special trip to be part of this event with us, so I want to thank you for doing that, and for all that you do for the community, especially for educating people about space domain awareness. I think a lot of people are really educated, and horrified, by the kinds of things you expose us to that are happening in space every day, so thank you for that thought leadership. Your question is a good one, and a lot of people are very skeptical of AI.
In the last panel, they talked about how ChatGPT was the fastest growing app ever, and I think that's both good and bad. I think ChatGPT really opened up a big domain for all of us; that was the thing that motivated me to go write the book, like, hey, let's catch this wave. But I think it was also put out in the wild way too soon, because it caused a lot of skepticism: it makes stuff up, it isn't always right, it violates its guardrails, and people are misusing it in crazy ways. For people like you who are steeped in the physics, you would say, but it can't do physics, or it can't do physics as well as we can, and if we're trying to do a correlation function, or a pattern of life, or we're looking for anomalies, there are ways of solving that problem with physics. And you'd say, Jim, I've got you, and I'm not going to disagree that this can be solved with physics. But in every domain where we've said, hey, it can't do that — there was a comment in the last panel that the decades we expected were actually more like two months — so I too am a skeptic that it will do all those things. We do have clients that say, this is how you solve the problem; we know the physics, we know the ideal rocket equation, we know how orbits work. But in almost every domain — medicine, self-driving cars, and so on — when you put enough data into the system, there's an emergent behavior that we don't quite understand. So I think there's a possibility that this domain could be enhanced by AI. I think there's promise that if you combine generative AI technologies that are grounded in physics, they can help solve this one-Guardian-one-satellite problem, and they can help solve the "I have terabytes of data that I can't even load onto a computer" problem. So I know you're skeptical. I'm skeptical sometimes too, and I work on it every day, but I think we're right at the very beginning of something, maybe at that place where we've only sold the first 13 steam engines.

That's great. Nate, Brian — yeah, please.

We need to start with: what is the problem that we have today, and does AI, does machine learning, help to solve it? AI is really a goal — let's make machines intelligent — whereas machine learning is a tool: how do we take a bunch of data and train on it to get the results we're looking for? Then, in the space industry and the SDA industry, where can we use intelligence? For intelligence, for space traffic control of assets in space, for analyzing nefarious objects — why is that object moving so much, what is it actually doing versus what was put out in public? What are your predictive solutions for what's going on on a satellite? If there's a degradation in a subsystem or a component, how can we use machine learning to take the history of what's been going on with that satellite, or a sister satellite of it, to solve the problem humans are really there for? Humans are there to sit in front of a console, look at limits — the Christmas tree of green, red, and yellow — and act when something pops up. But in the age of proliferation, as we've said, one Guardian per satellite just doesn't work. So what are the day-to-day tasks that they are actually doing?
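To make that limit-checking point concrete, here is a minimal sketch in Python of flagging telemetry anomalies from a satellite's own recent history — an illustration only, not Quindar's actual system; the channel, window, and threshold are assumptions.

import numpy as np

def flag_anomalies(telemetry, window=100, threshold=4.0):
    """Flag samples that deviate strongly from the recent rolling history.

    telemetry: 1-D array of a single channel (e.g., battery temperature).
    Returns indices where the rolling z-score exceeds the threshold.
    """
    anomalies = []
    for i in range(window, len(telemetry)):
        history = telemetry[i - window:i]          # recent history of this channel
        mu, sigma = history.mean(), history.std()
        if sigma > 0 and abs(telemetry[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Hypothetical usage: a nominal temperature channel with a thermal drift injected at the end
temps = np.random.normal(20.0, 0.5, 5000)
temps[4200:] += np.linspace(0, 5, 800)             # emerging degradation
print(flag_anomalies(temps))

The point of the sketch is only that the "Christmas tree" watch is mechanical enough to hand to software, freeing the operator for the judgment calls.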
If this were 1958 and we were recreating the space industry and the tools we're going to use, we would 100 percent use modern-day technology. But what we need to be doing with that modern technology is solving the real problems: as we proliferate, how many people can we actually assign to dozens or hundreds of satellites? Or can satellites send down less information about what's wrong, by moving more of the anomaly prediction on board, to the edge, onto the spacecraft? There are a lot of tools in AI and ML, but where we always need to start is: what is the problem we're actually trying to solve, and does this solve that problem?

OK, that's great.

I want to come at this from more of a human analogy. Most of you probably have someone in your life who does some DIY work at home. As hard as they work, they're just not that skilled with their tools. You walk into their house and you're like, I'm glad you spent all that money on that, but man, your results vary. I'm not going to see this on House Hunters anytime soon. We're still looking for the craftsmen of AI. There are people out there, salesmen, who will sell you something whether they understand it completely or not, so you're going to get hallucinations sold as the best thing since sliced bread. But you asked how this applies to space domain awareness, and at ExoAnalytic that's what we do. We consider ourselves craftsmen for how to organize data for applications like AI, or in the future autonomy, or to inform that Guardian who needs to make a split-second decision to support their mission. From a data standpoint, if you want to be a craftsman with this, you need to understand which tool to use when, exactly how much pressure to apply, and in what conditions it's used. So you start with, in our case, optical telescopes. We point them at the sky where we believe spacecraft are, or are supposed to be. We take new measurements in the form of imagery and reduce those to detections. That takes an algorithm, and every time we use an algorithm we need to understand that translation. Perhaps you could train an AI with the results of that algorithm: I had an image, I got these answers, I'd like to recreate that process. I don't understand that process perfectly, but my AI is an oversized computational machine, so I could use this tool for that. The cost is that I might need a nuclear power plant's worth of energy to train that process to get the same answer I got out of a very efficient algorithm I've had for two decades. So is that the place to apply AI? Maybe, if you don't have a solution, but the cost might be that you're spending far more power than you need to solve that part of the problem. You need to figure out whether you need a scalpel or a sledgehammer at each step of the chain. Now those detections go into something called orbit determination: as I watch an object multiple times, I'd like to be able to describe the motion of that spacecraft or that piece of space debris. We do that through the process of orbit determination, and the first question is, from the new data, is the orbit I'm getting the same as the orbit I had? And if it's not, is that because the object may have maneuvered, or is there some other explanation for that data?
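As a rough illustration of that consistency check — comparing the orbit you had against the orbit the new data gives you — here is a minimal sketch in Python. The position-only comparison, the covariance values, and the three-sigma gate are assumptions for illustration, not ExoAnalytic's pipeline.

import numpy as np

def orbit_consistent(predicted_pos_km, estimated_pos_km, covariance_km2, gate=3.0):
    """Compare the propagated (expected) position with the newly estimated one.

    Returns (consistent, mahalanobis_distance). A large distance suggests the
    object maneuvered, or that something else (sensor bias, mis-tag) explains the data.
    """
    residual = estimated_pos_km - predicted_pos_km
    d2 = residual @ np.linalg.inv(covariance_km2) @ residual   # squared Mahalanobis distance
    return d2 <= gate**2, np.sqrt(d2)

# Hypothetical GEO example: the new orbit solution sits ~15 km off the prediction
predicted = np.array([42164.0, 0.0, 0.0])
estimated = np.array([42164.0, 15.0, 2.0])
cov = np.diag([4.0, 4.0, 4.0])                                 # ~2 km one-sigma per axis
ok, dist = orbit_consistent(predicted, estimated, cov)
print("consistent with old orbit:", ok, "distance in sigmas:", round(dist, 1))

When the test fails, the question Brian raises is exactly the one left for the analyst or the model: maneuver, or some other explanation?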
And as my analyst figures that out, he may write notes, called annotations, that say yes, there in fact was a maneuver, or there was this change in stability, or whatever it is I'm monitoring. As I accumulate the detections, the description of that space object's motion and behavior, and the analyst's notes, I now have what is called an expert-labeled data set. And if I've organized this to the point where a machine can interpret it at speed and scale, now I can empower you to make the decisions you need to across a fleet of spacecraft, where before we were using a slide rule and a pencil to figure this stuff out, one object at a time.

There's an analogy here. I love the movie A River Runs Through It. The child goes to talk to his father; he's writing an essay, the father marks it up the first time and gives it back, and they just want to go fishing. So he writes the essay again, and his father marks it a couple more times and says, it's great — again, half as long. AI is different. You've been doing your job, and it's about to be replaced by automation. Why? Because the thing you're being asked to do is needed again, a trillion times. We want to fight the whole space war in the next ten minutes, before the pressure of the next decision arrives. The kinds of cognitive load we want our computation to carry are that much more significant on these shorter time scales. That's now the bar. And if we're still trying to achieve "can I do initial orbit determination followed by some statistics" after 30 years of working this problem, guys, we're out of a job. Big tech or somebody is going to put you out of it; they're going to figure it out with some other math, they're going to apply a bigger computer, and we're going to move on, because industry demands that we move at that speed and scale. So there are places where it's going to be applied; I just hope the folks employed to do so are the craftsmen we need to take it to the next level.

OK, I'm still a skeptic, but maybe coming around a little bit. This next question is again going to be an open one. When we look at the commercial market for AI-enabled information products, it requires that a customer really know, I think, what they're after. I've seen this tension, in particular with the US government as a customer, between selling raw data products versus selling services which add information content to raw data. AI is absolutely adding this information content. How is the value of that appreciated? How does that need to be understood by prospective customers, and how is this marketable?

I think that boils down to trust. We've sold data to various customers, and even the word "data" is loaded, right? Do I want obs data? Do I want state vector data? Do I want data that describes the history? Do I want just your scared-straight briefing that you gave at the Spacepower Conference, where everybody looked and said, oh my God, what are China and Russia doing this week? It depends on the question that needs answering. And then I think there is a lack of appreciation for the fact that that data is the integral of the infrastructure — the sensing, the networking, the power — it took to generate it, and then the analytics and the team, or maybe the human supervising team, that's helping do the curation, right?
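Circling back to the expert-labeled data set described above — detections, orbit history, and analyst annotations — here is a minimal sketch of how such labels might train a maneuver classifier. The feature choices and the scikit-learn model are illustrative assumptions, not a description of any vendor's system, and the data here is synthetic.

# Assumes scikit-learn is available; features and labels are hypothetical stand-ins.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: features derived from successive orbit solutions for one object
# (e.g., delta semi-major axis, delta inclination, fit residual);
# each label: the analyst's annotation (0 = nominal, 1 = maneuver).
X = rng.normal(size=(500, 3))
y = (np.abs(X[:, 0]) + np.abs(X[:, 1]) > 2.0).astype(int)     # stand-in for analyst labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", round(clf.score(X_test, y_test), 2))

The value is not the particular model; it is that curated analyst notes turn years of observation history into supervision a machine can learn from at scale.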
It's not just something that came out of nowhere. And data — if it came off of your cell phone, there's a whole data exchange, privacy, and ethics concern today associated with whether you are being appropriately compensated for your data. So it is marketable, but we have to make sure customers understand that the data is the product of a very complex process. It wasn't just that we bought some AI one day and it does everything, with no checks and balances against it. It is the sum total of an organization's expertise and resources to make sure that it isn't just data for a use case, especially if that use case is high impact. If it has high impact and a high consequence for being incorrect — we call it a hallucination, which almost sounds cute — well, I don't want to hallucinate if the job is to intercept a missile. If my data is going to support Golden Dome, then it had better be right. The last panel said: be first, be right. That's absolutely the coin of the realm.

Yeah, and I'll take the time and accuracy point. How is it saving time? At Quindar we've created a chatbot that is essentially natural-language query. The conversations we see our users have on console are: hey, I got this CDM, this conjunction message — what are the mission rules, do I need to maneuver? Who is the other object? Maybe it's a friendly, maybe they have active propulsion — who is going to be maneuvering? So you ask the system, what is the probability of collision? Or you take the agentic approach, the next level, where the system can actually act on it, because for a proliferated constellation you might see hundreds of conjunction messages a day, or you're doing orbit raising and your conjunctions are constantly changing. These are the discussions people are having on console, so how can AI/ML provide a solution that fast-forwards that decision and is accurate enough, whether you act on it or just get a summary for discussion, versus taking a day to do all that analysis and the human power and resources to produce it in the time frame that exists? Another example, taking the data route: if you get an image down, how can you disseminate that information — the information you're looking for, the information you don't have — in a way the software can understand and present to you? How can you bring in threat intelligence, using natural language processing across different sources, to understand the threat picture for today and over time, and make the right decision about what we are doing today? These are conversations you're having every day on console: what are we doing today, what did we do yesterday, how did things go? And as the previous panel said, if you can summarize all of that into "everything was great, don't worry about it" — that is the end goal at Quindar: not screens for commanding, for situational awareness, for position, but the information you want, and everyone wants that information in a different form factor. So how can you present that to the user?
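For flavor, here is a minimal sketch of the kind of conjunction screening described above — checking each incoming CDM against simple mission rules before deciding whether a maneuver review is needed. The field names, thresholds, and rules are assumptions for illustration, not Quindar's product logic.

from dataclasses import dataclass

@dataclass
class CDM:
    """A tiny, hypothetical slice of a Conjunction Data Message."""
    secondary_object: str
    time_to_tca_hours: float      # time to closest approach
    miss_distance_m: float
    probability_of_collision: float

# Illustrative mission rules; every operator sets their own.
PC_THRESHOLD = 1e-4
MISS_THRESHOLD_M = 500.0

def screen(cdm: CDM) -> str:
    if cdm.probability_of_collision >= PC_THRESHOLD or cdm.miss_distance_m <= MISS_THRESHOLD_M:
        return f"REVIEW: plan avoidance vs {cdm.secondary_object}, TCA in {cdm.time_to_tca_hours:.0f} h"
    return f"MONITOR: {cdm.secondary_object}, Pc {cdm.probability_of_collision:.1e}"

messages = [
    CDM("DEBRIS-12345", 36.0, 350.0, 3e-4),
    CDM("SAT-67890", 48.0, 4200.0, 2e-7),
]
for m in messages:
    print(screen(m))

At hundreds of CDMs a day across a proliferated fleet, this triage is the part worth automating; the natural-language layer then just answers "which of these do I actually need to look at?"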
Chatbots are really good at that, and so are generative user interfaces. People will say, I want to see a dashboard, but I want a configurable dashboard. That is the solution we are seeing from our customers, but it's a bit like the old Henry Ford line that users would have just asked for faster horses — this is a faster-horses moment.

Yeah, and Jim, I don't know where we are in the story anymore. There is an element of AI doing things it shouldn't be able to do, and to Brian's point, we don't know if it's true yet, but it's weird that it kind of looks like it can. There was a recent study, one of the first to ever have this result: it compared radiologists, radiologists with AI, and AI alone performing the same function, and for the first time the AI alone beat both the radiologist and the radiologist with the AI. You'd say, no, no, the theory says the combination is better than both — and yes, but the radiologists spent more time questioning the AI's decisions than just going with them. There are tremendous human, mechanized workflows throughout the federal government, especially in the IC. Sometimes they call it tradecraft: this is my tradecraft. To Brian's point about craftsmen and craftspeople — you say, show me how you do it, and they say, I take this from Excel, then I copy this over here, then I color-code these columns, and then I sort it. And you're like, this is what you do every day? The most expert person in the whole world does this thing, and then the AI says, I just got the same answer. We're not yet comfortable with that. And you mentioned analytics as a service — we're starting to see the government acquire analytics as a service, or results as a service. NGA released a large contract called Luno where they say they're going to buy analytic results, enriched products, as one of the kinds of things they would buy. But we still say, I want to see the data that's inside there, because I want to know how it made that decision. We still don't trust it, Brian, to your point. It's like that scene in A Few Good Men: I want the data — you can't handle the data. How did you make this decision? Oh, I used all of the observations and all of history. Please show them to me. Well, do you have a 900-petabyte flash drive I can put them on? So we're still at this place where we don't know what to make of it. We're confused that sometimes it seems better than us — why does that Tesla drive better than me? I don't know, but it does. And I don't want to sound mean to all of us here on the stage, but I remember when the World Wide Web came out when I was in high school, and the kids of this generation say things like, I know, my parents were born in the 1900s. So I think the problem is largely going to be solved by the AI natives who are growing up with this as part of their lives, while we're having a weird, uneasy feeling about it — and we'll get over it when they turn us into batteries and plug us into the machine.

Well, thanks — now I'm feeling really old and scared. OK, but you're feeling a bit better, so I appreciate that.
I'm just trying to be a little edgy, since Brian literally insulted my home-improvement DIY skills; I feel like I can go out on a limb.

All right, some elements of our remaining question bank we've already touched on, but I think we can tease out some more elaboration. So, Brian: is space domain awareness data ready to support responsible application of AI? What needs to change so that better AI solutions for space control can be effectively trained and applied?

It's not ready yet. There's more data, there's more precise data, there's more diversity in the data, but the data engineering and data organization needed to truly feed those space control and autonomous warfighting systems effectively — I think we're in that process now. I think we're becoming those craftsmen. I think we're building the trust we need to build inside our government customers, but the rate of adoption just needs to accelerate, and the threat gets a vote, and so does industry. There have been articles in the last year about the number of autonomous maneuvers already happening at scale, and imagine — it doesn't even have to be China — if somebody hacked a mega constellation and wanted to shut down every launch window, it would be conceptually very easy to do. It's the changes that are happening up there: every time there's a state change, there needs to be an update in the catalog for that spacecraft's model of motion, and we need to generate those at speed and scale. The only way we do that is to move away from the ideal of having a Guardian or a subject matter expert in the loop and toward enabling these things to happen autonomously. So we need to promote ourselves — congratulations, we all get a promotion — and we get to train these processes and use the tools available to us, hopefully responsibly, ethically, and efficiently, and without a nuclear power plant if we don't need one. But we need to use them, or we're not going to keep up with the rate at which things are scaling. It's not good enough to be six or seven years late to SPD-3 and to be solving the problem of ten years ago, implemented on a new computer system, because that's not going to be ready to handle the space traffic this year, let alone next year, or what's coming from the people who are already planning — they have told the FCC how many more spacecraft they plan to fly. What is our plan to scale the commensurate amount of information we have to collect, process, and understand to support our decision making and keep up? And if the answer to that is "I don't know," then I welcome our AI leaders, because they're the only ones who can solve the problem that's coming for space.

Yes, a lot is coming, and Nate, I know you'll be able to highlight some of this — Starlink, the Chinese mega constellations, and all these other launches. Why is this challenge of applying AI to space missions so hard? I know you have a great background in this.

Yeah, I've helped build and operate a mega constellation, so I know firsthand what it takes to do that, and what is coming with the proliferation of space and with our adversaries proliferating and trying to gain the high ground. It's a culture change.
The attitude of speed has to be built into the culture, so if our speed is blocked or inhibited by bureaucracy, by continuing resolutions, by a misunderstanding of whether we can or can't use AI, especially in the government, that's going to slow us down. Hesitation like that is one of the reasons it's so difficult to maintain what we're trying to do today. We need to be using modern technology. There are existing legacy systems out there, and we all know TRL-9 is the best thing you can tell a customer. On the software side we still have to tell customers we're TRL-9, even though we're not sending anything to space — it's all on the ground, and we push a change and it's fixed in minutes with DevSecOps pipelines. Yet that attitude of "it needs to be TRL-9 before you can command this satellite" persists, and the same mindset surrounds how we manage mega constellations. One of the things that's really challenging about a mega constellation is staffing. That's not the solution, but it's the immediate answer — hey, we need to staff up — and you're going to be constrained just by buildings. And the tools that we use, back to that earlier comment about what we're using today in the industry: I'm sure you all know of customers, or of servers you're running, that are still on Windows Server 2008, because it works, and you're paying Microsoft — who doesn't even update that software and its security anymore — directly to keep it up and running, because that's what the fleet uses. But that technology does not scale and was not meant for today's proliferation. So here's where we need a cultural shift: we have a problem and we have competition. As a startup, there are two things we're always watching. One is making sure we're building the business, staying alive, and continuing to build momentum; at the same time, we're making sure we're doing this efficiently, because otherwise our competitors will, and then we'll be obsolete. You're either ahead of the game or you're forced to change. I think that cultural shift in today's attitude will help us adopt AI in the near future, and that's what we're building at Quindar — showing that we have these products without waiting for funding for them, whether it's the chatbot, or predictive analytics, or understanding how we optimize and dynamically retask when the vignettes we have differ per customer and we have to reprioritize because a ground asset is out. A lot of that cultural change, I think, will shift the industry from where we are today to where we need to be.

Yeah, and that's a good segue to something near and dear to your heart, Pat: the staffing of ground stations. We've already talked about the Apollo program and the room full of people — of course that's crewed flight, a bit of a different animal — but as the US government goes to proliferated architectures, what does this mission management need to look like?
Yeah, Jim, that's a great question — if you have four or five hours, I brought some slides. OK, so a couple of big things there, Jim. When Seth was talking, I wanted to get out my phone and start waving it with the flashlight on, because he had such a great perspective on so many issues — hey, we can't be operating one satellite with one Guardian — and it's like, yes, that is so true. But when you see these mission control stations, one of the things I think is bonkers is we always go, "FIDO go, engines go," and you think, why are we doing that out loud with people looking at screens? Why don't they all just push a button, or just ask, are we ready to go? It's an algorithm. First, we have to get to an environment where we decide the split between what happens on the ground — to your point about rolling changes — and what happens in space. A lot of people are saying, oh, I'm going to do onboard processing, but there are tremendous challenges with how much processing you can get, how much power you can generate, how you get the heat off the spacecraft, and then how you roll updates and coordinate them. We do have some burning platforms, like the recent push toward Golden Dome with the Missile Defense Agency, and this idea that the timelines are going to drive us to automation — some kind of sense-making from multiple sensors that happens very, very quickly, at scale, across a constellation that maybe doesn't even know whether the processing is happening on board itself, on board another space-based cloud node, or on the ground. That, to me, is going to be the big breakthrough: instead of saying I have a ground station and a spacecraft, you actually can't tell the difference. When you do something on your phone, you don't know whether it's computing on the phone or in the cloud, and we're going to have to get there with a space-ground operating system that simply decides how to distribute that processing. The second thing is combining multimodal sensor data. I have observations coming in — Brian, maybe some of them are from your telescopes, maybe they're from my organic sensors on board, or I'm getting tips and cues from some other system. Great: when they agree, it's a math problem to put them together. When they disagree, how do you know which one to trust, when the adversary may be actively injecting things into each of them — which they might do, if I followed along with Tom's despair comments. And then there's the point that when the SHTF — Google it — no one is going to be able to talk to anybody; everything is going to be jammed, and you're just going to have whatever you have. So to me that's a series of problems that will take us through the next decade of figuring out how to enable these things. It's exciting that we'll have the opportunity to solve them, but I think AI gives us a chance to say, OK, don't just sprinkle AI on it — how would AI help us with the data fusion problem? How would it help us with the multi-sensor orchestration problem? How would it help us with the decision making?
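To put a toy example behind the "when they agree, it's a math problem" point, here is a minimal sketch of inverse-variance fusion of two estimates of the same quantity, with a simple consistency gate that flags when the sources disagree badly enough to ask which one to trust. The numbers and the scalar formulation are assumptions for illustration, not any program's fusion algorithm.

import numpy as np

def fuse(x1, var1, x2, var2, gate_sigma=3.0):
    """Inverse-variance fusion of two scalar estimates of the same quantity.

    Returns (fused_value, fused_variance, consistent). If the two estimates
    differ by more than gate_sigma combined standard deviations, they are
    flagged as inconsistent — the cue to ask which source to trust.
    """
    diff_sigma = abs(x1 - x2) / np.sqrt(var1 + var2)
    consistent = diff_sigma <= gate_sigma
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    return fused, 1.0 / (w1 + w2), consistent

# Hypothetical: a ground telescope and an onboard sensor estimate the same along-track offset (km)
print(fuse(10.2, 0.5**2, 10.6, 1.0**2))    # they agree: fused estimate with smaller variance, True
print(fuse(10.2, 0.5**2, 18.0, 1.0**2))    # they disagree: flagged False — which one do you trust?

The math is easy when the gate passes; the hard part Pat describes is what policy you apply when it fails, especially if an adversary may be injecting into one of the feeds.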
The last thing I want to say in this regard: one of the most exciting things I've read in the last couple of months was an interview with Frank Kendall as he was on his way out as Air Force Secretary. I don't have the exact quote memorized, but he said something like what I just said: things are going to happen so fast that you're going to have to have AI in the loop. That was the first senior official I've ever heard say that. I worked an IC program where they said we're always going to have a human in the loop, and I said, I understand that's a requirement; I'll do what you told me to do. Then it evolved to, well, we're going to have a human on the loop to confirm, and just watch the thing run. And by the way, that's my worst nightmare. Because there's this massive multi-billion-dollar system that a bunch of super smart people have engineered over time, all designed to work together in network-centric warfare, and then there's one person on Christmas Eve — the lowest-ranked person, stuck with the least amount of leave — with a big red button that says "turn off United States." Something crazy happens on the screen, they're being injected and confused, there's a lot going on, they've had too much eggnog, and they push the button and the whole thing turns off. And that's how the war ends. It's the next book I'm writing, Jim — spoiler alert. Or if there's anyone from Netflix in the audience: that was the pitch.

OK, wow. OK. Any final thoughts on that?

I want to piggyback on that real quickly. For space domain awareness, take a simple model: you're one sensor, whether on the ground or in space, watching another object in orbit, and your goal is to understand that object's orbit. The thing you need is geometric diversity before you can converge that orbit. What does that mean? It means I have to wait for the object to move enough for me to be confident in converging its position and velocity vector and how they change. Time is not something I can afford to trade in the way we're talking about this problem, which means it's better to have more than one sensor looking from multiple places or in multiple modalities, because I don't want to give up that all-precious time. I need the time left over to think, to make a decision, or to transmit and cue somebody else. So favoring architectures that combine off-board sensor data with on-board sensor data is where we're going to need to go — whether that's civil, IC, DoD, or Space Force — which means we're all going to be talking to each other in order to collaboratively navigate the evolving hazard and threat population that is now in space. That is a paradigm shift. It used to be that you could design your own ground segment, your link segment, your space segment, and run your system — pun intended — as though it were in a vacuum. Those days are over. You are part of an ecosystem of collaborative space systems that must navigate the domain the same way you would on land, at sea, or in the air.

Yeah, and to follow up on that, and to Pat's point: I think one of the myths about satellite operations, spacecraft operations, and mission management is that it's all about the satellite. It's really about the ground network, the cyber, the ISPs, and the software that is in control of all of that.
Treating each of those as another node or asset helps you find which route to take in order to task — and that could be across domains. It could be just finding a different antenna; it could be finally using crosslinks when they come to be. In the end, these are just TCP/IP addresses, and you're creating a virtual mesh network of how to route yourself around. Our vision is that satellites are flying servers in space. Think of Netflix: thousands and thousands of AWS servers, and they have a handful of people on call just to keep them up and running — those aren't the people who deployed the Zoom application that makes this conversation work. That's where we enable operators and Guardians to focus on payload management while mission management focuses on how we connect these nodes and find a path. They don't care which ISP this is going over, which data center, which antenna, or whether there were any failovers — that's our job. What we have to present to the Guardians and to our users is the uptime, so they can communicate to their end users what the objective is.

Yeah, and Brian, I very much agree with your point about geometric diversity and those timelines. Not to sound like too much of a fanboy, but the network that you've built is, from a nerd standpoint, a wonder of the world. The fact that you can get all that data from those diverse perspectives — honestly, I don't know how you came up with the idea to deploy it, but it provides such a unique capability that if you had tried to pitch it to someone as "I want a contract to go do this," they would have said you're nuts, you're not going to be able to put those telescopes in all those places. And you just did it. That causes people to think a different way: if I can route this information and combine it with other things that I have, I can do unexpected things. I think that's very important. When you take what you've done with observations and processing and combine it with some of these other types of capabilities, like a proliferated communication network such as Starlink, then you start playing trade games: what if I had a couple of these things, plus a bunch of little classified toys that nobody knows about? That's a different paradigm from the way the government has always tried to acquire systems — I've got to do it all myself — when commercial is sometimes already there. So another enabler, Jim, over the next couple of years might be how those public-private partnerships — the same way we got Bob and Doug to the space station; that's a hard thing to say, public-private partnerships, without embarrassing yourself — how those types of constructs could be used to give us resiliency. Because I think the adversaries are going to go after the nodes they know we have, and we know that they know we have them, so you always have to have a backup plan in your pocket. One of the big solutions there is the kind of collaboration we've been talking about: I need geometric diversity, and I also need network path diversity, because I do believe there's going to be a major comms blackout when the SHTF — but you can say, I've got a backup plan that can operate through that. That, I think, is going to be the way we enable success.

OK, thank you.
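A minimal sketch of the routing idea described above — treating antennas, gateways, and crosslinks as nodes in a graph and finding a path around an outage — is below. The node names, link costs, and use of a Dijkstra-style shortest path are illustrative assumptions, not anyone's operational network.

import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a dict-of-dicts graph: graph[node][neighbor] = link cost."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, weight in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + weight, nbr, path + [nbr]))
    return float("inf"), []

# Hypothetical contact graph: mission ops reaches a satellite via antennas or a crosslink gateway
graph = {
    "mission_ops": {"antenna_guam": 2, "antenna_alaska": 1, "gateway_crosslink": 4},
    "antenna_guam": {"sat_42": 1},
    "antenna_alaska": {"sat_42": 1},
    "gateway_crosslink": {"sat_42": 2},
    "sat_42": {},
}
print(shortest_path(graph, "mission_ops", "sat_42"))   # nominal route
graph["antenna_alaska"] = {}                           # antenna outage: automatic failover
print(shortest_path(graph, "mission_ops", "sat_42"))

The design point is the one Nate makes: the Guardian should see only the uptime and the path chosen, not the per-antenna plumbing underneath.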
So let's touch on the final closing topic of trust, and specifically trust when it comes to space control decisions. Does a human need to be on the loop, in the loop — it sounds like you want them out of the room. But no, seriously: for the most vital functions, for protecting the capabilities that space enables for the US, what does that look like? Is a person required? Is a person a hindrance? What does that look like in the decision space that may have to be exercised in the near future?

There are going to be some functions where you have to take people out of the loop, and that has become a less controversial view. It's a view I was afraid to admit for a long time because it was very unpopular, but the self-driving cars will really work when you take all the people off the road. It's not that the Teslas can't drive; it's that they don't know how to react to the people, who are unpredictable. But if the Tesla could say, I need to take that exit and I want to be in that lane in six minutes, can everybody make me a hole, because I've got a lady having a baby in the back seat and I'm trying to get her to the hospital — those are things the machines can figure out. We teach teenagers how to drive; they're not very good at it, and the computers are certainly way better. So we are going to have to identify some functions — and I don't think it's firing nuclear missiles; there will be some things where there's a person in the loop for policy, and even for our comfort — but as a society we're going to have to get used to some things being done in a completely automated way. Do you yell at traffic lights? I know they're not AI, but you go, oh, I can't believe that just changed — maybe you yell because they won't change — but there's a controller in there making the lights change, and it's very rare that you see two cars barrel into each other because the lights were both green. It's not people over there flipping switches. It was, a really long time ago: when the first traffic lights started, they were people, and then traffic directors, and stop signs were invented at some point because, well, we ought to put some kind of guardrail here where two roads intersect. So we're going to have to think about the functions that become automated and just let it go — we're going to have to be comfortable with letting that go. I don't know a lot about the exact disciplines of space control, and maybe Nate, you've got some thoughts on that, but some parts of it we're going to have to say are automated. For keeping that asset up and running, you shouldn't need a person in the room for certain decisions.

Yeah. For detecting whether this is an asteroid or a nuclear missile, whether we're under attack, and what we should do — you might want some guardrails in place for that. But it goes back to why people are in the room today, and understanding what problem they are solving and why that solution is there.
When was it implemented? Sometimes it's just explaining the background of why this person is here and what job they're doing, and then asking what we can do with good physics, with ML, with automation, with artificial intelligence — whatever the solution really is. A lot of this is day-to-day tasks: how do we keep this asset up and running, how do we make sure the antennas we're using — which are very archaic and not as automated as they need to be across a lot of different infrastructures — can be automated, and why is that person there? That gives us time back, especially as we proliferate, to understand what the architecture looks like, what the mission is, and what the mission objectives are; and when it comes to the critical tasks, what guardrails are in place and what we can automate out. Especially for the uptime of the bus on the spacecraft side and the uptime of the ground system, you need high availability, and high availability is not achieved by having two people, or multiple people, in a room.

If a critical warfighting decision can be analogized to playing a single game of chess, and you have to win, how do we make sure that every human who ends up on the loop is at least as good as Magnus Carlsen? How do we get to that point? Do you want the AI playing that game for you? This is an AI conference, so you've probably followed the progression of MuZero and AlphaGo and how they won those bigger games. For the things we can bound as closed games — things we understand well enough that we can trust the AI is performing at least as well as the Magnus Carlsen of the particular chess game we've trained — you might be willing to trust that system. But we need to rapidly get to the point where, whether a human or an AI is making that decision, it gets implemented, because again it's the speed and the scale that is going to drive the policy. We need to understand this problem and derive policies because we have seen this conflict before, if nothing else in modeling and simulation, before the emergent behavior occurs. Then you make the best decisions you can. But it shouldn't be out of pride, or out of "this is the way we've always done it." If you have available compute, and it's your supercomputer against theirs, and it's down to this one game, this one match, I want the one that has seen more possible games, more possible moves, and is making the wisest possible decision, whether for near-term prioritization of goals or for strategic ones. And that needs to be a dialogue with the folks who are making that decision. That is not our commercial companies' responsibility; we can inform on the technology and how we might get there, and support in the roles that we do, but that is the dialogue that I think needs to be had. And if we assume we have the convenience of time, I think that's a strategy for failure.

So we're nearing the end of our time. Any final thoughts, any save rounds? Did we bring you back from the edge, Jim? You started as a skeptic.

Well, I heard I may be getting obsolete, but I think physics is still important, so I'm going to rest on that truth.

Jim, we're not — you're not obsolete.
China has ten of you, and we need a thousand to keep up with the way the problem's growing.

Thanks for making me feel better, Brian.

Jim, we should try to make Jim GPT and see what it comes up with. We should train it on your posts and say, hey, this vehicle just moved in space — what would Jim think it's doing? Let's see how close it gets.

How long do you have?

Yeah, but we could do a fly-off. We could do the chess game, right? Real Jim Shell versus virtual Jim Shell. We could sell tickets — we could pay for the whole thing by selling tickets, I think.

All right, well, gentlemen, thank you, and let's give our panelists a round of applause, please.

Excuse me. Thank you so much, Jim, Brian, Nate, and Pat. Those were great anecdotes, great stories, and great conversations around those topics. One of the things, bringing a little reality to some of the things Pat brought up: I took a note when he brought up the World Wide Web. I remember, early in my space days, we would deliver an entire stack all the way to the warfighter, and I had the honor of doing that early in my career. There was a tasking system that was produced, and we developed that tasking system and put that tasking server all the way down into Humvees. When we updated the software, we updated the hardware with it, and we went downrange again and swapped those things out. When I first started my career, 9/11 happened, so I began my career during active operations in theater — we were at war at that time. I remember one of the times I went down to visit the 82nd Airborne, I was going to replace a server. They had just come back from theater and were wheeling this Humvee out, and I had a big notebook of all the things I needed to do to the server. Before I got there, a young captain pulled me around to the back and said, hey, let me show you something real quick. One of the things he did was take a leaf blower and blow a lot of sand in my face, and he said, you know what, this server is taking up too much space; we need to do something else with the space in this Humvee. Can you do something different with this server? We took that information back, and it was the advent of putting tasking on the World Wide Web — or a closed network — that those warfighters could access, so we could take that server away. Now they could go to a website and get access to tasking. And so forth and so on: time got better, the capabilities got better, and now we're at the point where we're talking about AI and how it can enable and continue to mature what the warfighter uses. I learned from that experience — and we had a saying — that if you're not ahead of the game, the game will catch up with you, and then you'll have to make a decision inside the game. So I look at these opportunities where we can make decisions about AI, and the conversations we've had today — it's been truly enlightening, and I've seen this firsthand throughout my career. Great job to that last panel — a truly engaging conversation.