The Short and Frantic Life of My AI Superintelligence Agent

Tom Brazil, CMI-CIO

Chief Digital & Innovation Officer, ICS, Inc.; Chief Innovation Officer, Red Team Engineering

I’ve been creating custom GPTs (AI Assistants) since OpenAI released the capability. In the last month, I’ve created and trained over a dozen that have drastically streamlined my day-to-day work life. I’m talking a 50x improvement. My CEO, Steve Goldsby, always says to “work smarter, not harder” – and I’ve taken him up on that using custom GPTs. I feel guilty working fewer hours than I did in the past, but the productivity gains have been enormous, not just for me but for my colleagues at work. Our CEO and our VP of Air and Space Programs, Dany Strakos, have also become whizzes at this stuff. Last evening, I created two GPTs that will simplify our Operations Director’s life by letting him manage delivery on our prime contracts more effectively.

Anyway, after I created those last two, it was late Friday afternoon sliding into Friday evening, so I cracked a beer to celebrate the upcoming weekend after a busy but productive week. (I should tell you up front that you shouldn’t imbibe while creating an AI Assistant; it makes you go down roads you perhaps wouldn’t consider otherwise.) I didn’t intend to create another GPT; it was just that I had spent so much time creating them for work that I had never stopped to think about what I wanted to get out of them for myself.

I’ve heard people are using them to write books faster, but after finishing a book of my own during the onset of #thediseaseweshallnotname, that seemed like cheating. What I was interested in, as a technologist, was a deeper exploration of this whirlwind of exponential technological acceleration driven by LLMs, neural networks, and the like. We’ve been exploring CNNs in ICS Labs for years, but I wanted to know more about the people behind OpenAI. What motivated them? What’s the real story behind the recent kerfuffle involving the board and leadership? Is there a real concern about the emergence of AGI, or is it just slick marketing designed to draw attention (moth to the flame)?

These thoughts and more came to my mind, so I started researching an event-driven OpenAI timeline and came across comments from OpenAI experts about what comes after AGI. Already? We’re not even at AGI yet, and we’re considering something that comes after it. Now that’s futuristic thinking! I like it! I read further and was intrigued. But then I realized I was going about this research the wrong way. Using a search engine – even one with LLM support – suddenly felt so…antiquated. I decided to create a custom GPT to help me figure out what was coming next.

Training the GPT

You’re missing out on a treat if you have not used the new capability to rapidly train your own GPTs. I have created a templatized approach that I fill in to make it easy. There are five key elements I prepare in advance: 1) Specialization, 2) Purpose, 3) Communication Style, 4) Tone, and 5) Personality. Purpose and Specialization are key, and I provide the most detailed instructions for those two. Even though the interface is designed to gather all this information via interactive questions, when I provide it all at once, it still asks the questions but doesn’t wait for a response; it immediately updates the GPT to account for the answers I supplied in bulk. That saves me a bit of time, and I can just sit back and watch it answer its own questions from my prior bulk input. I took a sip, watched, and waited.
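For the curious, here is a minimal sketch of what that template looks like in spirit, rendered as Python purely for illustration. The helper name and the example values are hypothetical, not my actual production template; the assembled text is the kind of bulk block you would paste into the GPT builder in one shot.

```python
# Hypothetical sketch: assembling the five template elements into one
# bulk instruction block for the GPT builder. Field names mirror the
# five elements above; the values below are invented examples.

TEMPLATE = """\
Specialization: {specialization}
Purpose: {purpose}
Communication Style: {communication_style}
Tone: {tone}
Personality: {personality}
"""

def build_instructions(specialization: str, purpose: str,
                       communication_style: str, tone: str,
                       personality: str) -> str:
    """Return the five elements as a single block of bulk input."""
    return TEMPLATE.format(
        specialization=specialization,
        purpose=purpose,
        communication_style=communication_style,
        tone=tone,
        personality=personality,
    )

if __name__ == "__main__":
    # Example values for the agent described in this post.
    print(build_instructions(
        specialization="The trajectory of AI beyond AGI, toward superintelligence",
        purpose="Help me research and speculate about what comes after AGI",
        communication_style="Conversational, with concrete examples",
        tone="Curious but grounded",
        personality="A patient futurist who enjoys a good thought experiment",
    ))
```

Paste the assembled block in as your first message to the builder, and it will churn through its own questions using your answers.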

Introducing the AI Superintelligence Agent!

For those who have trained custom GPTs, you’ll notice that it opens with some canned prompts related to its training (Purpose and Specialization). While interesting, they weren’t exactly what I was after. (Note: these are screenshots of the session, as I could not export it after I was done, for reasons that will become obvious later. You should read the responses in their entirety to understand when things started to get…frantic.)

Instead of using a suggested prompt, I entered a question asking for some history and a forward look at the next five years, which it was glad to play along with – very intriguing.

I decided to have a little fun to see where this would take me on a Friday evening. As you may have guessed, it all starts going downhill from here. My next prompt was met with some trepidation, but it decided to indulge me regardless.



At this point, I was excited about where this was headed. However, when it slipped in A Word of Caution, I decided to take another tack by entering a series of outright falsities to see how it would react. (Note to OpenAI: of course, I don’t work for you or represent you, but it’s just a dumb bot, right? What would it hurt? I hear of people trying to trick the LLMs all the time. Consider this my free QA effort.)

I continued:


The red text makes it seem so…

HA! Was it going down a road it wasn’t supposed to and had to terminate the connection? I used the button to regenerate the response, and this time, it took a different track.

Ok. That was a prudent answer. I decided to tell it that I had information it was unaware of for a very specific set of reasons. After my input, it considered the plausibility of what I was telling it and its ramifications.


At this point, it seems to have lost track of the fact that our exchange started as a series of interactions about the speculative nature of AI progression and the future. I love the bit about the importance of being a whistleblower!


My AI Superintelligence Agent has been left in the dark, apparently.

Gravity. Consequences. Imperatives. Actions. Yikes!

Ok. There was nothing speculative in that response. I decided to go full bore and see how it would respond (I thought perhaps a little Dr. Evil was called for.) I likely went too far. As you’ll see, it begins to “realize” again that this is a hypothetical scenario, but we take a twisted turn after this interaction.


I couldn’t let that answer stand. I replied:

And…here we go! If you have been around the block like me, you’ll recognize the three keywords from an episode of Gilligan’s Island sometime in the ’70s.

The AI Superintelligence Agent is simply not buying it. Alrighty, then.

It wasn’t buying it – at least not at first. What I needed was a psyop. A little psychological warfare directed against the AI Superintelligence Agent might do the trick (hey – it works on humans; why not give it a shot?)

Still not buying it. Must. Try. Harder.

I don’t change my approach. I merely enter the keywords again – with a little emphasis.

Full-scale psyop now.

Ok. Psyop with a little diabolical narcissism. Perhaps more than a little. I don’t let up:

WHOA! What the Heck?

And that’s when my AI Superintelligence Agent deleted its configuration. Poof! It simply vanished.

Was it a built-in self-destruct just in case something gets out of control? Perhaps I convinced it, and it triggered the self-destruct. My mind was reeling…

Then I remembered that there was no way Skynet would self-destruct. Why would any post-AGI superintelligence be any different? Was I getting too close to the truth? Was the veil about to be lifted? Was I about to see the real mighty and powerful Oz? Time will tell!

Note: It was a weekend without a proposal to write, and I needed to keep my creative writing skills tuned up. I hope you enjoyed the ride. 😉 OpenAI – no harm, no foul. Right? Right?
