SOA as a Distributed Neural Network

As part of a casual conversation a few weeks ago I started thinking about how I’d go about programming a large-scale distributed AI. In the scenario being discussed the AI would be running the combat simulations for a starship. It was one of those “How would you create this piece of SciFi tech?” conversations.

Now let me start with the caveat that I’m not an AI specialist. I toyed around in the space some years ago, but there are certainly those in the world who know much more about this than I do. In fact, what I’m going to talk about has almost certainly been detailed in someone’s master’s or doctoral thesis, and I have no illusions that this is anything original.

So with all of the disclaimers out of the way, let’s jump in and take a look at what this type of SOA/NN AI might look like. This will stay at a high level, focusing on the commonalities that let us intertwine the two concepts. Afterwards I’ll delve into some reasons why we might want to create such a beast.

I think the number one thing that makes this a possibility at all is the autonomy principle of SOA. Each node of a SOA should encapsulate all of its logic and be callable for a single function/purpose. The same simple concept can be seen in the node of a neural network: a single node takes an input, analyzes the data, and passes an output on to the next node. Once the output is passed on, the node is done. Now, there are some differences. For example, it’s generally expected that a node in the higher levels of a SOA may need to call into the lower-level services in order to complete its function. Think about a piece of business logic that needs to call down into the application layers of one or more applications in order to get the data needed for the operation. This isn’t the norm for a neural network node, which instead expects that all data needed for the operation be passed in as inputs. While there are obviously other differences, let’s look at one possible way to bridge these two concepts.
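To make the parallel concrete, here’s a minimal Python sketch of a node that plays both roles: like a SOA service it’s callable for a single purpose, and like a neural network node it takes everything it needs as inputs and emits a single output. All the names here (`ServiceNode`, `activate`) are invented for illustration, not from any real framework:

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class ServiceNode:
    """A single-purpose, self-contained node: SOA autonomy meets the
    artificial-neuron model. It holds all of its own logic and exposes
    exactly one operation."""
    name: str
    weights: Sequence[float]
    activation: Callable[[float], float]

    def activate(self, inputs: Sequence[float]) -> float:
        # The classic artificial-neuron computation: a weighted sum of
        # all inputs, run through an activation function. Note that
        # everything the node needs arrives as inputs -- it never calls
        # down into another service.
        total = sum(w * x for w, x in zip(self.weights, inputs))
        return self.activation(total)

# A toy node with a ReLU-style activation.
node = ServiceNode("risk-score", [0.5, 0.25], lambda t: max(0.0, t))
print(node.activate([4.0, 8.0]))  # 0.5*4 + 0.25*8 = 4.0
```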

Now, in order to bridge some of the gaps between the two, what if we created a series of services that lived behind our normal SOA services? This extra dimension of our SOA would serve to facilitate more organic processing. In practice it is just a special implementation of the synapses in an artificial neural network, whose purpose is to use a set of weighted parameters to pass the outputs of one service (node) to the next in the chain. At first I thought of this as just a more detailed orchestration layer, since one of its major jobs would be to usher the inquiry through the process, but then I realized that it didn’t really act the same as SOA orchestration. An orchestration sits above the worker layers of the SOA and calls down into them. What I needed was something that filled the gaps between the lower-level services.
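A sketch might make the distinction clearer. The “synapse” below sits between two worker services: it takes the upstream service’s output, applies its weight, and forwards the result downstream. Unlike an orchestration it never calls down into anything; it just fills the gap between services. The services and weight here are toys of my own invention:

```python
from typing import Callable

def make_synapse(weight: float,
                 downstream: Callable[[float], float]) -> Callable[[float], float]:
    """Build a weighted connection from one service's output to the
    next service's input -- the synapse of our service-based network."""
    def synapse(upstream_output: float) -> float:
        return downstream(weight * upstream_output)
    return synapse

# Two toy "worker services" chained through a weighted synapse.
def parse_service(x: float) -> float:
    return x + 1.0

def score_service(x: float) -> float:
    return x * 2.0

chain = make_synapse(0.5, score_service)
print(chain(parse_service(3.0)))  # score(0.5 * (3+1)) = 4.0
```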

Another piece that would probably be needed is a set of specialty services that wrap around normal SOA services in order to plug them into our synapses properly. Think of them as a hidden layer of dark matter that allows our universe to balance. In fact, that’s exactly what I’m going to call it: the “dark matter layer.”
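Here’s one hypothetical shape such a wrapper could take: it adapts an ordinary SOA service (which expects a request payload and returns a structured response) into a pure node that takes numeric inputs and returns a numeric output, so the synapse layer can plug into it. Every name in this snippet is invented for illustration:

```python
def legacy_inventory_service(request: dict) -> dict:
    # Stand-in for a real SOA service call that would normally go
    # over the wire and fetch its own data.
    return {"status": "ok", "stock_level": request["sku_count"] * 10}

def dark_matter_wrap(service, field: str):
    """Adapt a request/response-style service into a numeric-in,
    numeric-out node that the synapse layer can connect to."""
    def node(inputs: list) -> float:
        response = service({"sku_count": int(inputs[0])})
        return float(response[field])
    return node

inventory_node = dark_matter_wrap(legacy_inventory_service, "stock_level")
print(inventory_node([3.0]))  # 30.0
```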

Now at this point hopefully I’ve painted a picture that allows you to accept the premise of an artificial Neural Network that can be based on an existing SOA. I may explore more details of the technology in future posts, but for now I’d like to talk about why we’d ever consider even doing this.

So, we have our existing SOA, and we’ve set up a framework to attach an artificial Neural Network onto it. What could we do with something like this?

The most common use might be to create predictive models based on real-time data derived from the corporate SOA. Using AI for predictive modeling is not a new concept by any means. Traditionally, though, these are programs that churn through extremely large amounts of data stored in data warehouses, and are designed to track trends over longer periods of time such as months and years. What if instead we looked for predictions of what a workload might look like over the next few hours, based on real-time data? An AI routine like this might run in the background all the time, constantly updating the predictive model.
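A toy version of that background routine: rather than batch-crunching a warehouse, it keeps a small rolling window of real-time workload samples and continuously refreshes a short-horizon forecast. The simple moving-average model is my own assumption purely for illustration; a real system would use something far more sophisticated:

```python
from collections import deque

class WorkloadPredictor:
    """Maintains a short rolling window of realtime workload samples
    and produces a near-term forecast on demand."""

    def __init__(self, window: int = 12):
        self.samples = deque(maxlen=window)  # old samples age out

    def observe(self, jobs_per_minute: float) -> None:
        # Called continuously as events stream off the SOA.
        self.samples.append(jobs_per_minute)

    def predict(self) -> float:
        # Forecast = mean of the recent window (a deliberately
        # simplistic stand-in for a trained model).
        if not self.samples:
            return 0.0
        return sum(self.samples) / len(self.samples)

p = WorkloadPredictor(window=3)
for load in (10.0, 20.0, 30.0):
    p.observe(load)
print(p.predict())  # 20.0
```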

With the above node in place we might then create another AI system that would be a consumer of this model, and would take some form of action based on its results. For example, a complex routing system might utilize the predictive model to determine the most efficient way to route work. Instead of a simplistic First In, First Out (FIFO) work queue, we now have a much more dynamic queue that assigns work based on the predictive model and perhaps job priority. The assessment of job priority might in and of itself be a neural network basing its judgements on real-time data.
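As a sketch of that dynamic queue: instead of FIFO order, jobs below are ranked by a score that combines each job’s own priority with the predicted load on its target worker. The scoring formula is an invented example, not a recommendation:

```python
import heapq

def route(jobs, predicted_load):
    """Order jobs by (predicted load on target worker) minus (job
    priority); lower scores run first. jobs is a list of
    (name, priority, worker) tuples."""
    heap = []
    for i, (name, priority, worker) in enumerate(jobs):
        score = predicted_load.get(worker, 0.0) - priority
        # The index i breaks ties deterministically (FIFO among equals).
        heapq.heappush(heap, (score, i, name))
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

jobs = [("a", 1, "w1"), ("b", 5, "w1"), ("c", 1, "w2")]
predicted_load = {"w1": 3.0, "w2": 0.5}
print(route(jobs, predicted_load))  # ['b', 'c', 'a']
```

A pure FIFO queue would have run `a` first; here the high-priority job `b` jumps the line, and `c` beats `a` because its worker is predicted to be nearly idle.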

While this might not be a good approach for all situations, it is a viable way of thinking about enterprise systems and SOAs for any company that wants not only to react to changing situations quickly, but to have an infrastructure that can react for it.

Thanks for indulging me on this one, and as always, keep thinking malignant thoughts.