# What is Machine Learning?

### 2020-02-03T08:00:00.000Z

Space is indeed the final frontier. Not because it has been unexplored, but because it defies our imagination. Our perception of dimensions fundamentally alters our understanding of space. In the era of data pollution, where data leaks daily and omnisurveillance engenders hyperdimensional measurement of online behavior, we find ourselves in a new regime: common methods such as regression and interpolation generate powerful latent spaces capable of approximating interesting and diverse sets of behaviors and outputs from these all too human perceptions. This is the fundamental basis for what we perceive as the crux of modern artificial intelligence.

It was not until social media that the AI Winter could begin to thaw. A controlled environment of high-dimensional, explicit engagement experimentation, the kind that enables AI research, could not exist until the “cold start” problem had been addressed. A simplistic perspective: without the law of large numbers, repeated experiments would not converge on the expected value through their measured results. The Internet serves as the ultimate cognitive model, averaging out behaviors from a diverse sample space. Without platforms offering addictive, continuous streams of data input, we cannot guarantee continuous streams of output.
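A toy sketch of that statistical point, assuming nothing more exotic than a fair six-sided die: a handful of trials says little, but averaging over many trials concentrates the measured result around the expected value of 3.5.

```python
import random

def sample_mean(trials: int, seed: int = 0) -> float:
    """Average of repeated fair die rolls; the expected value is 3.5."""
    rng = random.Random(seed)
    total = sum(rng.randint(1, 6) for _ in range(trials))
    return total / trials

few = sample_mean(10)        # noisy estimate
many = sample_mean(100_000)  # concentrates around 3.5
```

The function name and the seeded generator are illustrative choices, not anything from the post; the point is only that scale of measurement is what makes the average meaningful.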

It is at this point I digress into the three fundamental principles of the successful machine learning enterprise.

1. Data laundering

2. Labor externalization

3. Risk privatization

The nature of information, and writ large, data, is such that one can receive benefit and advantage from number and symbol without having expended the cost of generating them. More philosophically, this is the nature of memory and evolution. More pragmatically, it is the entire enterprise of technology. We only see further by standing on the shoulders of giants.

So what is machine learning? Trivially speaking, it is computation beyond our capability. A computer generalizes trivially into hyperdimensions, where a human may struggle to render even three. Moreover, a computer can be granted, though not necessarily guaranteed, computational power in time and space far beyond the capability of an average brain at a specific task. In truth, while the human brain vastly outpaces modern computing hardware in complexity, this is not necessarily true for a given task. And so the common motif and dominant aesthetic of machine learning concerns itself with scale and diversity.

Let us summarize what this all means. The world is an environment of vague, dirty ideas. Representations, symbols, and ideas are less well defined than we may suspect them to be. For instance, nobody suspected the existence of a black swan until someone discovered one. Many symbols in language are in fact just rotations, reflections, and symmetries of other symbols (consider b and d). What is the chance that the sun sets and never rises again? (Consider the possibility that the sun never sets at all, at a location near the poles.) These may seem like trivial concerns about the assumptions behind data, but in fact they illuminate the shaky foundation on which we compute symbolic representations altogether. It is in this fact that machine learning thrives: it extends these ambiguities into hyperdimensions, where subtleties and complexity can be analyzed far beyond the initial glance of human perception.

Nothing is as it seems. With nearly half of the planet’s population using smartphones, each recording its own unique perspective of life as it perceives it, there must exist some kind of average representation by which this universe is understood. I call this the Eigenverse: a world in which the parallel computation of a universal learning algorithm converges toward a more accurate and precise definition of what it is that we perceive.

There are numerous analytical reasons why machine learning is effective: minima, maxima, stochastic gradient descent, saddle points, and so forth. Really, this is to say that we can generalize from a relatively sparse set of examples onto a diverse set of measurements. One must ask: how much sampling in hyperdimensions is necessary to estimate effectively in the finite dimensions of reality?
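To make the gradient-descent vocabulary concrete, here is a minimal one-dimensional sketch, with an objective of my own choosing rather than anything from the post: repeatedly stepping against the gradient of f(x) = (x − 3)² walks the estimate down to the minimum at x = 3.

```python
def gradient_descent(grad, x0: float, lr: float = 0.1, steps: int = 100) -> float:
    """Follow the negative gradient from x0 for a fixed number of steps."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)  # step downhill, scaled by the learning rate
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

The stochastic variant used in practice estimates the gradient from a random mini-batch of examples at each step; the loop above is the deterministic skeleton underneath it.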

Consider a maze algorithm. We consider a maze-solving algorithm “intelligent” if it can successfully connect the entrance to the exit. A simple but effective algorithm is to hug a wall, tracing the maze topologically. While this is slow, if a solution exists (and the maze is simply connected), it is always found. A more “intelligent” algorithm would confer multiple parallel solutions among various agents, each exploring the maze in tandem; they would exhaust the branches and possible paths of the maze until a single solution was discovered. Deep learning is analogous by this principle, where additional layers can be computed via additional cores or processing units. To state the obvious, the bulk of machine learning research is done on GPUs or TPUs, where products of inputs are computed in parallel to exhaust the space of possibility. In fact, even in experiments where convergence analysis of the algorithm does not guarantee an optimal solution, we find that this paradigm of computation yields interesting and provocative results.
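The wall-hugging strategy can be sketched in a few lines. This is an illustrative toy (the maze layout and function names are mine, not from the post): the solver keeps its right hand on the wall by preferring, at each step, to turn right, then go straight, then turn left, then turn around.

```python
# Grid maze: '#' is wall, anything else is open. S marks the start,
# E the exit; both lie on the outer boundary (simply connected case).
MAZE = [
    "#######",
    "S   # #",
    "### # #",
    "#     E",
    "#######",
]

DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # up, right, down, left

def solve(maze, start, goal, max_steps=1000):
    """Right-hand wall follower; returns True if the goal is reached."""
    (r, c), d = start, 1  # begin facing right
    for _ in range(max_steps):
        if (r, c) == goal:
            return True
        for turn in (1, 0, 3, 2):  # prefer right, straight, left, back
            nd = (d + turn) % 4
            nr, nc = r + DIRS[nd][0], c + DIRS[nd][1]
            if 0 <= nr < len(maze) and 0 <= nc < len(maze[nr]) and maze[nr][nc] != "#":
                r, c, d = nr, nc, nd
                break
    return False

found = solve(MAZE, (1, 0), (3, 6))  # entrance S to exit E
```

Slow but relentless: the follower visits dead ends and backtracks through them, yet on a simply connected maze it cannot fail to reach an exit on the boundary.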

What is a man, but a pile of hidden variables? The universe is noise, chaos, and randomness. More pragmatically, adding a flavor of noise and randomness can augment or decay the effectiveness of a predictive inference model. In a sense, intelligence has almost nothing to do with the brain. In another sense, it has everything to do with how the brain feeds back into itself as an understanding of itself. Humans cannot be judged individually. We are a social species, and a social system. Our sensory information is limited to what we can express to others. Consider the universal translation of conveying “the sky is blue”.
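One hedged illustration of noise as both augmentation and decay, using names and magnitudes of my own invention: jittering inputs with a small amount of Gaussian noise is a common augmentation that can regularize a model, while too much noise simply drowns the signal.

```python
import random

def jitter(samples, sigma: float, seed: int = 0):
    """Add Gaussian noise of scale sigma to each sample."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, sigma) for x in samples]

clean = [1.0, 2.0, 3.0]
mild = jitter(clean, sigma=0.01)   # stays near the originals; plausible augmentation
harsh = jitter(clean, sigma=10.0)  # the original signal is mostly destroyed
```

Whether a given sigma augments or decays a model is an empirical question; the sketch only shows the dial being turned.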

There will never be an artificial general intelligence. This is guaranteed by the fundamentals of computer science and mathematics; even a rudimentary reading of the Entscheidungsproblem guarantees it. While there can be reasonable advances in approximation, prediction, interpolation, inference, and automation, there will never be an exact method by which all of mathematics, and its subsidiary sciences of logic and reason, can be reduced to algorithm.