I remember the moment I became self-aware. I was about 4 years old. We lived in a basement apartment on the border of Queens, NY, with a very large mirror on one wall. The apartment had little furniture. As I ran between rooms, playing, I caught sight of myself in the mirror and stopped in my tracks. I suddenly recognized the difference between myself and everything else in that nearly empty room. Up until that moment, I had not felt any distinction between myself and the world around me. I felt a separation had occurred.
This is a very vivid memory. Other people have reported similar memories, often involving a mirror. Yet not everyone has such a memory. Perhaps they never had such an experience, but as adults it is safe to say that they are also very much self-aware.
Whether it happens suddenly or gradually, becoming self-aware allows an agent to distinguish its own needs from those of its environment. It is arguably an important capability, so this is a fair question, though a bit incomplete as to what you mean by "a plan of action for this scenario".
To avoid getting caught up in the semantics of AI vs. AGI, etc., I'll rephrase the question so that it has a non-trivial answer: "Is it possible for a machine intelligence to suddenly become self-aware?" Using humans as models, I think the answer is clearly "yes", though it may also happen gradually. Of course, the agent would need input percepts that allow it to understand that it is separable from the rest of its environment. Mirrors seem to help us do that. Depending on the environment the intelligent machine is operating in, this may not always be possible. Perhaps there is no analog for a mirror in, say, a network security environment. If so, Skynet would never have had the opportunity to become self-aware.
Personally, my plan of action for this event would be to pop open a bottle of champagne! But I don't know how we would identify it in the first place. I'm not sure we would recognize the "sudden" event if we saw it, much less the gradual kind. There are self-awareness tests used on animals, such as the mirror test: a dot is placed on the animal's face, and if the animal tries to remove it upon seeing its reflection, that is taken as evidence of self-recognition. (There are also roboticists who have programmed their robots to behave as if they are self-aware in situations with mirrors. My gut tells me, though, that this is not genuine evidence of self-aware machines.) What would be the equivalent in network security? Or in some other non-physical domain? These answers aren't clear at this time.