Ability vs. Tactics – Ego Separation
The MM-Bot has been completed and is fully operational, but it has left a stale taste in my mouth. The truth of the matter is that the simulator suffers from a pair of harsh limitations. Simple simulation droids loaded with MM still get quickly overwhelmed by the sheer ability of many combatants on this ship, making the formulation seem a pointless waste of computation time. Difficult programs, however, are only difficult due to overwhelming abilities. The observation, simply put, is that ‘ability’ is just as hard to simulate as ‘tactics’. There is a fine line between something you learn nothing from due to simplicity and something you learn nothing from due to unrealistic circumstances. What is needed, in truth, is real combatants.
Of course, as you all know, the simulator can draw figments of combatants from memory and create a surprisingly realistic experience. This comes from the fact that both ability and mindset are encompassed in one simulation of an entity – it is a simulation of the person or being. But something like the MM-Bot algorithm can’t simply be forced into a person; laid over such a simulation, it wouldn’t react at all. MM-Bot is, after all, a tactics interface.
Let’s take an example – say, myself.
My list of ‘abilities’ might include: the ability to swing a sword, throw a punch, hold a position firmly, etc.
In turn, my list of basic tactics and ego might include: defensive while analytical, frantic when confident, careless when prideful, a tendency to attack from the left. Or some such.
And thus the goal is to replace that list of human ego with the minimax bot’s algorithm, which clarifies the problem. We need a system whose input is a fully functional simulation of a person and whose output is two separate interfaces: a blank bot that carries only ability, and a personality file for tactics and ego.
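The two output interfaces described above can be sketched as a pair of simple data structures. This is a minimal illustration only – the names `AbilityBot` and `PersonaFile`, and every trait listed, are my own placeholder assumptions, not the simulator's actual identifiers.

```python
from dataclasses import dataclass, field

@dataclass
class AbilityBot:
    """Blank bot carrying only physical capability routines."""
    abilities: set = field(default_factory=set)

@dataclass
class PersonaFile:
    """Tactics-and-ego layer that can be laid onto any AbilityBot."""
    traits: dict = field(default_factory=dict)

# A full character simulation then decomposes into the pair:
me_abilities = AbilityBot(abilities={"sword_swing", "punch", "hold_position"})
me_persona = PersonaFile(traits={"analytical": "defensive",
                                 "confident": "frantic",
                                 "prideful": "careless"})
```

Replacing the `PersonaFile` half with the MM-Bot algorithm, while keeping the `AbilityBot` half intact, is exactly the goal state described above.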
This has been the leading basis of my research over the last week – a brute-force exhaustion method with a few inference variables. The process must be human-led, which makes it a little tedious, but the results I’ve had so far, using myself as a test subject, have been more than worth the effort.
The first step is to load a full simulation of a person or combatant and engage it in conflict. Then, at the interactor’s judgement, the system is paused and snapshotted by calling an application I added to the database, “Freeze Frame”. A Freeze Frame is a snapshot of the scenario and of system resource usage, essentially giving insight into what the currently loaded character is activating in its own code. The system should be frozen at a few key points: one snap of regular, uninhibited combat, and one for each of as many different ’emotions’ as the interactor can draw out – when the character is humiliated, angered, grieving, saddened, pained, etc.
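A minimal sketch of that snapshot step, assuming each Freeze Frame simply records which of the character's methods were active when the simulation was paused, tagged with the emotional state the interactor drew out. The function name and every method name here are illustrative inventions.

```python
def freeze_frame(label, active_methods):
    """Snapshot the currently active method set under a given label."""
    return {"label": label, "methods": frozenset(active_methods)}

# One baseline frame, plus one frame per emotion drawn out:
frames = [
    freeze_frame("baseline", {"swing_sword", "block", "footwork"}),
    freeze_frame("angered",  {"swing_sword", "block", "footwork",
                              "overcommit_left", "ignore_guard"}),
    freeze_frame("prideful", {"swing_sword", "footwork",
                              "overcommit_left", "taunt"}),
]
```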
The final step is an intersection separation, which opens all of the system-usage Freeze Frames and cross-references the methods called by the character during spurts of emotion, drawing them out. By referencing several different frames against one another, tick counters can analyze which resources belong where. Once categorized, they are fully separated into the character’s two files – ability and persona.
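The intersection-separation step can be sketched with plain set operations: methods active in every frame, baseline included, are treated as raw ability, while methods that only light up in emotional frames are attributed to persona. The frame data and method names below are invented examples, not real output.

```python
def separate(frames):
    """Split Freeze Frame data into (ability, persona) method sets."""
    all_sets = [set(f["methods"]) for f in frames]
    ability = set.intersection(*all_sets)        # active in every frame
    persona = set.union(*all_sets) - ability     # only fires under emotion
    return ability, persona

frames = [
    {"label": "baseline", "methods": {"swing_sword", "block", "footwork"}},
    {"label": "angered",  "methods": {"swing_sword", "block", "footwork",
                                      "overcommit_left"}},
    {"label": "prideful", "methods": {"swing_sword", "block", "footwork",
                                      "taunt", "overcommit_left"}},
]
ability, persona = separate(frames)
# ability → {"swing_sword", "block", "footwork"}
# persona → {"overcommit_left", "taunt"}
```

Note the design consequence: the more distinct emotional frames the interactor draws out, the tighter the intersection, and the cleaner the split between the two files.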
And ta-da! Now, by loading an ability-only bot, one can choose to open the archives of whatever personas are available. Many of the system’s current training interfaces work with it, as do newly created interfaces such as my MM-Bot. On top of that, if more crew members are led through the process, we could get some very interesting interactions. For example, you could load your tactics into my ability set, or my tactics into your ability set, and so on.
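The swap described above follows directly from the split: a combat bot is just an (ability, persona) pairing, so files from different crew members can be recombined freely. A toy sketch, with every name and trait an assumption of mine:

```python
def load(ability_set, persona_traits):
    """Assemble a combat bot from an ability file and a persona file."""
    return {"abilities": set(ability_set), "persona": dict(persona_traits)}

my_abilities = {"sword_swing", "punch", "hold_position"}
your_persona = {"confident": "patient", "pressured": "aggressive"}

# Your tactics loaded into my ability set:
hybrid = load(my_abilities, your_persona)
```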
Think of the possibilities for self-improvement! Now that I’m nearly finished, every crew member will be able to be led through their own separation process, load themselves with the Minimax Bot, and thereby fight an unbiased version of themselves that is constantly looking at the outcome of its actions ten cycles into the future. We’ll be able to get some foresight into our rash actions and prideful decisions.
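The ten-cycle lookahead is ordinary minimax search. A minimal sketch follows; the toy game tree (integer states, moves that add 1 or 2, score equal to the state) is a stand-in of my own, since the real combat model is not spelled out in this log.

```python
def minimax(state, depth, maximizing, moves, evaluate):
    """Return the best achievable score looking `depth` cycles ahead."""
    if depth == 0:
        return evaluate(state)
    children = moves(state)
    scores = [minimax(c, depth - 1, not maximizing, moves, evaluate)
              for c in children]
    return max(scores) if maximizing else min(scores)

# Toy example: the maximizer grabs +2 on its five turns, the
# minimizer concedes only +1 on its five, so ten cycles ahead
# the optimal value is 5*2 + 5*1 = 15.
best = minimax(0, 10, True,
               moves=lambda s: [s + 1, s + 2],
               evaluate=lambda s: s)
# best → 15
```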