A couple of months ago, I presented a talk at the Benemerita Universidad Autonoma de Puebla’s Physics Institute (IFUAP) on the Generalized Holographic Theory developed by Nassim Haramein. The invitation was serendipitous: the Holographic model’s prediction of the muonic proton radius had just been confirmed – within experimental precision – by the latest electronic hydrogen measurements from Bezginov et al. (2019). These measurements also showed that the standard model value is off by about 4%, a discrepancy well outside the experimental uncertainty.
I wasn’t sure which frame would best introduce the subject. Should I start with black hole thermodynamics, or the holographic principle? Despite the pertinence of both topics, neither quite captured the dimension of my concern. The unified model demonstrates scaling from the Planck scale – and even smaller – up to the size of the universe, a fact which by itself is of ultimate importance! But why should this matter to me, and to you? Where does it lead? So instead of starting with these topics, I decided to begin with artificial intelligence and how it relates to the unified model. Could it be a coincidence that the unified model is running parallel to artificial intelligence, while the two haven’t yet “met”?
The word holographic could imply that reality is a holographic projection, as explained here and represented in Figure 1 below. The holographic principle states that the entropy of any mass is proportional to its surface area, not its volume; volume itself could be illusory, and the universe a hologram isomorphic to the information imprinted on the surface of its boundary. Since entropy and information are equivalent, a holographic principle based only on surface information suggests that the universe could be projected from a two-dimensional information structure on the cosmological horizon, just as volume appears in a 3D film projected from a flat 2D screen.
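The area scaling of entropy can be made concrete with the Bekenstein–Hawking formula, S = k_B A / (4 ℓ_P²), where A is the horizon area and ℓ_P the Planck length. A minimal sketch in Python (using CODATA constant values; the solar-mass example is my own illustration, not from the talk):

```python
import math

# Physical constants (CODATA 2018, SI units)
G = 6.67430e-11         # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8        # speed of light, m/s
hbar = 1.054571817e-34  # reduced Planck constant, J s
k_B = 1.380649e-23      # Boltzmann constant, J/K

l_planck = math.sqrt(hbar * G / c**3)  # Planck length, ~1.616e-35 m

def bh_entropy(mass_kg):
    """Bekenstein-Hawking entropy S = k_B * A / (4 * l_P^2),
    with A the horizon area of a Schwarzschild black hole."""
    r_s = 2 * G * mass_kg / c**2       # Schwarzschild radius
    area = 4 * math.pi * r_s**2        # horizon area
    return k_B * area / (4 * l_planck**2)

M_sun = 1.989e30  # kg
print(f"S(1 solar mass) = {bh_entropy(M_sun):.3e} J/K")
```

Because the horizon radius grows linearly with mass, the entropy scales as M²: doubling the mass quadruples the entropy – a signature of area (rather than volume) scaling.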
Fig. 1: The holographic principle schematized in https://steemit.com/science/@etherealcreation/do-we-live-in-matrix-holographic-universe-theory.
Taking this concern further, it may also lead to the conclusion that we live in a simulation, something Elon Musk and other technological and scientific authorities say is highly probable.
It is important to note that such conclusions are not derivable from Haramein’s holographic approach, because his model doesn’t reduce reality to information at the surface. Instead, reality is explained as the dynamic between the information confined in the volume of a bounded system and the information it can effectively exchange with its surroundings, which therefore expresses as mass. Mass in this context is the unfolded portion of the whole information contained within such a bounded system, intertwined with – or interconnected to – the information contained both inside and outside its boundaries. We could say that mass is the balanced state of an information-transfer inertia, resulting from a bounded volume and the impossibility of expressing all the information enfolded within it. Figure 2 depicts the idea of volume information voxelated with bits, or Planck Spherical Units (oscillations at the Planck scale), and the surface-to-volume ratio derived from it.
Fig. 2: The tessellation of PSUs on the surface and within the volume of a sphere of radius r, where rl is half the Planck length.
The holographic nature of Haramein’s model arises because the volume-to-surface ratio (R/η in Figure 2), when brought to mass units, gives the exact numerical value of the Schwarzschild solution for a black hole. We call it the holographic gravitational mass of the object. Surprisingly, when applied to the proton, this ratio amounts to the mass of the universe (taken as the sum of all other protons in the universe). This means that the information of all other protons in the universe is inside each proton! Hence the term holographic applies: the information of the whole Universe is contained in each fundamental unit, the proton. By the way, this would also explain the otherwise unexplained stability of the proton. Why do we obtain a much smaller mass for the proton in experiments? Because we measure the rest mass of the proton, which is the part of the information-energy resulting from the surface-to-volume entropy (η/R). (By “information” we mean the fundamental bits of energy, i.e., the Planck Spherical Units.) The holographic gravitational mass and the rest mass are inversely proportional. This extremely profound and simple principle shows that mass is a real thing! More details here.
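These ratios can be checked numerically. The sketch below is my own illustration, not Haramein’s derivation: it assumes, per Figure 2, that η tiles the surface with PSU equatorial discs and R fills the volume with PSU spheres (with PSU radius rl equal to half the Planck length), and it assumes the rest mass is obtained as 2(η/R) Planck masses, as in the holographic-mass literature. The proton charge radius is the CODATA 2018 value.

```python
import math

# CODATA 2018 constants (SI units)
G = 6.67430e-11
c = 2.99792458e8
hbar = 1.054571817e-34

l_p = math.sqrt(hbar * G / c**3)  # Planck length, m
m_p = math.sqrt(hbar * c / G)     # Planck mass, kg
r_l = l_p / 2                     # PSU radius = half Planck length

r = 0.8414e-15  # proton charge radius (CODATA 2018), m

# PSU counts: eta tiles the surface with PSU equatorial discs,
# R fills the volume with PSU spheres (assumptions per Fig. 2)
eta = (4 * math.pi * r**2) / (math.pi * r_l**2)
R = r**3 / r_l**3

m_holo = (R / eta) * m_p     # holographic gravitational mass
m_schw = r * c**2 / (2 * G)  # Schwarzschild mass for radius r
m_rest = 2 * (eta / R) * m_p # rest-mass expression (assumed factor of 2)

print(f"holographic mass : {m_holo:.4e} kg")
print(f"Schwarzschild    : {m_schw:.4e} kg")
print(f"rest mass        : {m_rest:.4e} kg (measured: 1.6726e-27 kg)")
```

With these assumptions, (R/η)·m_Planck reproduces the Schwarzschild mass for the proton radius exactly (both reduce algebraically to r·c²/2G), and the rest-mass expression lands within a fraction of a percent of the measured proton mass.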
A holographic principle concerning only the surface information brings a second concern: quantum computing and artificial intelligence (mainly artificial neural networks) are, through deep machine learning, reaching capabilities such that they could soon replace our theoretical models. The main reason is that neural network technologies for information processing can naturally achieve something standard theories have a rough time with: expressing emergent properties. Emergent properties are new properties that appear in a collective of individuals but are not necessarily present in any member of the collective. Such is the case addressed by many complex systems. Even consciousness could be an example of an emergent property.
It wouldn’t be outrageous to imagine the holographic principle and AI combining to give rise to a simulated reality. It would be a very plausible conclusion to come to… that is, if Nassim Haramein had not found the generalized holographic solution. Why so? AI and artificial neural networks are exceptionally good at correlating variables, but at a very high price… we gain information on the what at the expense of the how and why. The network becomes a black box: it reveals and weighs correlations between variables, but the process explaining how and why those variables are correlated is highly nonlinear and gets imprinted in the neural network. It becomes the network itself. Software and hardware mix; they are no longer separate entities. They too get correlated and become a complex system. The other way around also holds true: the science of complex systems is increasingly approached through machine learning and neural networks. And given the almost unsolvable complexity of our current physical models – for instance, the standard model of particle physics – it wouldn’t come as a surprise if we turned to AI to fill in the gaps and repair the inconsistencies in our theories. In our quest for more precise results, we would lose richness in the theoretical modeling.
Our understanding of the deeper nature of reality, and of our role in it, could be jeopardized, reducing us to mere gamers. Haven’t you had the feeling that we are progressively distracted by mobile phones and social media? How many among us already feel we are creators of a reality we cannot take further responsibility for? How can I be responsible for myself if “myself” has no meaning beyond a collection of correlations, where the order guiding the apparent chaos remains hidden? And even more importantly, why bother with any ethical issue if we live in a simulated reality? Why should I take as real something that isn’t? How can I become a responsible creator under such circumstances? This is a HUGE DEAL.
Image taken from https://www.pinterest.com/pin/240590805082450176/
What triggered my concern about this topic? Most of my physicist colleagues have turned to Data Science. We – myself included, at some period of my career – have survived a voracious system where postdocs work in precarious labor conditions, and we have done so thanks to Data Science. Given the limited and decreasing number of positions available worldwide, subject to a competition having more to do with connections and influence than with merit, Data Science has become the best alternative for researchers. Data processing and the information obtained from it are the new gold mine. Most companies will have their own data processing department: it’s inevitable. The labor market has grown exponentially for scientists willing to make the turn into Data. But what about the opposite direction – the data or information coming from traditional science and research?
Companies are turning to the science of data, with little interest in the data coming from scientific models.
The gap widens as renowned universities offer new degree programs dedicated to Data Science and Information Processing, toward which more and more students are turning. The message couldn’t be clearer: if fundamental science doesn’t recover its meaning – its reason to be – it will perish and be replaced by the meaningless, but very lucrative, field of Data Science.
Considering all these philosophical issues, I decided to name my talk: Holographic Theory and/or Data Science. And it was a good choice! The audience was receptive to the approach given to the whole topic; the topic within the topic. Surprisingly, or maybe not, many of the colleagues attending the conference already had similar concerns regarding data science… we share the feeling that we are about to become dinosaurs unless we become eager programmers in machine learning and data processing.
It’s undeniable that data processing and analytics provide crucial information. Big Data – the field dealing with the storage and processing of huge amounts of data, such as that recorded by scientific projects like the Event Horizon Telescope and the CERN experiments – requires these tools. Neural networks and machine learning are extremely powerful in any field one can think of, from the design of new medicines to the design of an entire city and its different features (buildings, transportation, amenities, schedules, etc.). These tools have become particularly indispensable in complex-systems fields – such as the social or biological sciences – given the lack of models and the need to reveal unknown relations between variables.
Machine learning and artificial neural networks are precious allies for providing and expanding information, but they shouldn’t replace our standard ways of reasoning and discerning reality. Maybe machine learning could develop a mind of its own, but that wouldn’t mean it had replaced our own mind. They are complementary ways of processing data and creating information.
It may seem very subtle, but this has far-reaching consequences. If it weren’t for the Generalized Holographic Theory, our future wouldn’t look so bright… a probable scenario would be stagnation in the obscurity of dark matter and dark energy, combined with AI simulations hijacking reality. Fortunately, this powerful learning tool can be extremely useful in the context of the holographic model, helping us connect to reality.
The reductionist conclusion that we live in a projection from 2D to 3D is radically different from the conclusion of the unified view embedded in Nassim Haramein’s holographic model, which beautifully condenses in the phrase, “Everything is connected.”
RSF in perspective
If you are driven by the meaningful concepts shaping our amazing universe, birthing the infinite diversity of phenomena around us and in ourselves, then the Generalized Holographic Theory is here for you!
By Ines Urdaneta, Research Scientist at Resonance Science Foundation