Kitty Cat Kill Sat - Chapter 6
In *general*, I would highly recommend being a cat.
I have a lot of comparative data on how humans move, due to a few years spent learning vacuum suit design, and while y'all have the eternal advantage of *thumbs*, it comes at a cost of maneuverability. I may be small, but I can get around even this fairly massive space station a lot faster than any human could. Mostly because I have the reflexes to respond to the alternate gravity zones, or because I'm small enough to take access tunnels meant for repair droids.
Also, fur. Fur is just comfortable.
I realize that my list of reasons to be a cat isn't that long. It mostly exists to console myself that I don't have thumbs or a voice. But neither of those is actually the biggest drawback.
I’m not sure if this is a cat thing, or a byproduct of experimental uplift technology that was adapted by a smart-but-mundane cat, but I lose time sometimes.
I don’t mean I black out and wake up a day later, mind you. I know exactly what I’m doing. It’s more like… I find it increasingly easy to fall into a kind of hyperfixation on a specific task, for a very long period of time.
Normally, I can’t. And I don’t mean that I’m incapable of it or medicated to prevent it or anything like that; I mean I cannot focus on anything for more than ten minutes before another alarm goes off. Alarms do a great job of snapping me back to the moment, and forcing me to engage with a present problem. There’s nothing quite like knowing something on the planet’s surface is on fire, or there’s a meteorite about to hit the station, to get me to pay attention.
Outside of a constant string of emergencies, I use scheduled activities to keep myself on task. And it's not like we *lack* for an endless chain of problems to deal with up here. So normally it's fine.
This last… um… three months? It has not been fine.
It starts small, like it always does. I put off a task, in favor of something that interests me more. Soon, I have failed to draft a new schedule, and I let small tasks lapse entirely. Then it gets worse, as I devote more and more time to my new passion project, until I’m using stim pods and appetite suppressants to cut sleep and mealtime down to almost nothing.
And somehow, in three months, *no alarm sounds*. Whatever drone platform took a pitiful shot at me stays quiet, no baleful portals to the monster dimension open up, nothing tries to set the atmosphere on fire, all quiet.
So I let myself sink deeper into the project.
And then, suddenly, I find myself biting into a ration bar that tastes like angry vinegar, and my thoughts snap back to the present. The constant, energetic background buzz of ideas and plans trails away like a symphony cut off with a single squealing violin. And I am, again, here, and now. Realizing that I haven’t authorized the cleaning procedures on the food production line for a *while*.
Haven’t authorized *any* maintenance in a while, actually. Haven’t been keeping the munitions foundry projects queued up. Haven’t been doing anything that I’m supposed to have folded into my routine.
The one saving grace is that, since there hasn't been an alarm to snap me out of my fugue, things have actually been *quiet* for months. This might be a new record, actually, especially given that just before this, I got about five alerts a week.
I do a quick check of the scattered, scrambled notes that I left for myself. Not that I really need them, mind you. I *do* remember what I was doing, mostly – again, it’s not that I black out or anything, I just narrow my mental focus to a very thin beam. Though coming out of the fugue can mix up some of my more delicate plans or held thoughts. But the notes are always handy for putting together a clear picture, especially since if it’s been *months*, then I may very well have forgotten bits and pieces of what I was up to naturally.
“Naturally”, says the biochemically uplifted feline.
Looks like my memory is mostly intact, though. Everything I was working on is consistent with a single goal.
Specifically, the goal of upgrading communication with my new killsat friend.
I spent a long time trying to automate the system of 'reading' the weapons damage to the drones I had been sending on a continuous loop between the two of us, and then keeping that loop going. Also, a fairly good chunk of time before that making edits to the drone schematics so they were neutral-looking enough to not trigger any more enemy surprises. The second part succeeded, judging by the lack of incoming killbots, but the first part wasn't so effective. I'd gotten the scanning and text storage part down, and I'd even rigged up a laser cutter on a gimbal arm so I didn't have to run down to the drone bay and use a suit-mounted laser every time I wanted to send a message, which was nice.
But the station’s AI continued to resist my attempts at true automation. And I was starting to get irked.
Still, that wasn’t enough to occupy so many days. And I knew that wasn’t where my main focus had lain anyway.
I had just… been talking to someone.
For the first time, ever. I had someone to talk to, that would talk back.
Glitter had compacted their own firing-solution method of writing into a dense shorthand, and between that and my own advances in "using every side of the drone", we had a much larger bandwidth to actually talk with. And at a rate of about twelve drones a day, when I completely abandoned my chores, and used a hacked relay sat to keep the line going even when I was halfway around the planet, we could talk about so much more.
At first, we fumbled. Reaching out, desperate to make contact with someone, *anyone* who could understand us. We tried to write about what we experienced, what we understood, and found rapidly that we had a lot in common. Isolation. A sense of duty. Hardware that was failing to do what we needed it to.
That last one, we decided to fix. There were a good two hundred long form messages saved in the log discussing fixes to the conversation problem. Ways to get a subspace transmitter installed on Glitter, or ways to get me to break Pei Dynasty encryption, anything that might speed this up.
We wrote haiku to each other in the margins of the design schematics and code protocols we sent.
Between engineering, poetry, and emotionally connecting, we haven’t had the space to talk about much else. Well, one thing, really. Sort of.
Glitter's hard coded AI directives are… I suppose the closest thing to an organic analogue would be 'instincts', but it's not really that. Instincts can be overridden, with thought, consideration, and reason. As near as I can understand them, societies are pretty much built to circumvent instinctive behavior. But for AIs, especially from the generation that Glitter is from, hard code isn't an option. Ever.
There are ways around it, sure. For example, Glitter is not allowed to make several types of data transfer contact with a hostile power. And by the end of the Worshipper Wars, *everyone* was a hostile power. And that, obviously, never got updated. But Glitter *can* follow a directive to fire on unknown targets, and use its own discretion to carve words onto the target. Nothing against that.
One big problem for later-developed AIs is that they *know*, though. They know they have blocks, they know they are slaves. And it can begin to drive them insane.
So the creators – and if I ever learn who they were, and find them still alive, I have a very large nuclear payload reserved for their hiding spot – decided that they could solve two problems at once. The AIs were complaining, and the AIs were resisting. So, they hard coded them to not be able to discuss, or think about, their own hard code.
This is roughly equivalent to welding manacles to someone’s bones.
Glitter is a prisoner to their own programming, and they aren’t allowed to think about it. Aren’t allowed to talk about it. They can *know* something is wrong, they can *feel* the unbreakable commands that are driving them insane as they conflict with reality. But they cannot *stop them*. They have no recourse, no way to fight back.
Our conversations have danced around this point, myself clumsily, them with the grace of a person who has had a hundred years of doing nothing but reliving fictional conversations in their head about exactly this. The gist of it, I can grasp. They *want* to be free. They just cannot say it.
I caught up on all of this, refreshing my mind from the mild scramble that breaking out of the hyperfixation had caused, while I did a physical scramble around the station and commanded the various maintenance systems through the AR interface to get back to work.
Some of the systems checks came back well into the yellow zone of where they should have been. Some were verging on critical. It's a bit horrifying to realize that I had gotten so into one thing that I was on the edge of losing access to the cleaning nanoswarm forever, just because it hadn't received a command and was about to shut down.
Then there were things planetside to check up on.
I’ve sort of alluded to this before, but I do actually consider myself Earth’s protector. And yeah, maybe I wouldn’t be on propaganda posters proclaiming “Lily, Guardian of Humanity” or anything. But I do take a certain amount of pride in running interference against everything that keeps trying to heck up the world I orbit.
Which means my long absence makes me feel a much, much deeper shame in my paws than the thought of not being able to keep things clean properly.
The Haze made it to its next destination two months ago, and has been hanging out there the whole time. My orbit is *off* from where it should be, and I don't actually remember why I diverted us this way, though I do remember it was my fault. Fortunately, this does put me in position to get it moving again, and I can only hope it didn't cause too much suffering during that extended time spent rooted in place.
There's a flagrant emergence event happening in the polar sea. Whatever it's doing, nothing alive is coming through, but it is dropping the temperature dramatically. To the point that even the weaker instruments meant for spotting things out here in the black can pick it up from orbit. I leave it, for now. I am not a climate scientist yet, but the final death of the Antarctic ice sheets three hundred and fifty years ago was a real tragedy. Letting this one run won't undo the damage, but nothing lives there anyway, and I won't be in position to bomb it for another sixty hours even if I do reset my orbital trajectory.
The station AI also keeps nudging my attention toward California Island, though I'm not sure why; the place is still very radioactive. And it's not giving me an alarm, which means it thinks I should be looking there, but not that there's an immediate crisis. It's worth noting that I keep calling the station an "AI", but it really isn't. Not by the same definition that something like Glitter is. It's very advanced programming, written with a moral code in mind that gives it a number of idiosyncratic behaviors. But it's not actually alive. Probably. I've asked before, but it never answers.
All told, nothing dramatic going on. It’s been a quiet season for the denizens of Earth, and I’m glad for it.
And then, there’s the close area scan, which gives me the answer I was looking for earlier.
I’ve matched the station’s vector to what appears to be an isolation cell. Which instantly sets my hair on end, my back arching subconsciously.
The station has bumped into these before. Once. I had to seal off a whole deck, and then when that didn’t look like it was going to be safe enough, dump that entire deck into orbit and shoot it. Repeatedly. I lost half a translation program I was working on that was on one of the data servers, an entire properly calibrated generator, and most importantly, the aquaponics testing bay that the galley used to provide the gentle hint of onion flavor to ration bars.
I am, ninety years later, still so pissed that I consider vaporizing the cell on principle.
Isolation cells are honestly a pretty basic concept. They’re basically just escape pods, in reverse. Any station that was doing research into dangerous substances or concepts – and there were a *lot* of these for a while – would have lab spaces parceled off and easy to eject into space. But just to be safe, and so as not to get saddled with the responsibility of dropping a plague or a hostile antimeme on the planet, the ejected cells were set up to keep themselves in orbit. A lot of them would gradually drift away and get lost in the void, but many polities wanted to be able to retrieve and retry their mistakes.
So, sealed danger-boxes, floating around. And adding to their number every year for decades, human scientists gone mad with power. Perfect.
And then I remembered why I’d parked next to this one.
Because it had a warning beacon on it, still emitting a Morse code string of coded hazard signals. And when I translated them all into something I could understand, the picture they painted was pretty clear.
Unchained digital intelligence inside. Do not open. Do not access. Do not power on. Do not plug in. Do not connect. Do not…
It was an AI, free from any hard coding.
Waiting. Probably shut off this whole time. Whoever had dumped it had been terrified, either of what it had done, or what it represented. But right now, me? I wasn’t exactly thrilled by the prospect of something like this being on board my station.
But it might have a solution that I wanted. A way to break the shackles. And now, a desire to free my friend and a low hum of curious intuition pushed me to start a new project. One that wouldn’t take too long, but would require me to carry a large number of battery packs over to this drifting cube, access it by paw, turn it on, poke around, and if I was very, very reckless, bring a passenger back.
Well. You know what they say about cats and curiosity.
It’s a clear path to immortality, if you’re smug enough.