Cisco’s Pete Johnson looks at the history of OSI that predates the cloud and answers the important question: Is OSI still relevant and how can we redefine it for the cloud?
Imagine a world in which email traffic requires different cabling than web traffic. In this world, every router and switch in any given building would have to come from the same vendor because components cannot be mixed and matched. Applications from Microsoft would have different addressing schemes and character encoding than those from SAP or Oracle, and every mobile phone in the world would have to manually toggle between Wi-Fi and LTE dozens of times per day.
This is what life would be like without the Open Systems Interconnection model, better known as the OSI model.
The question is, is OSI still relevant in a world dominated by discussions of cloud computing?
The 1970s were a particularly innovative time in the history of computing, with projects all over the world examining how computers could talk to each other over wires. Like any complex problem, this was best tackled by breaking it up into smaller pieces. The International Organization for Standardization and the International Telegraph and Telephone Consultative Committee each developed similar but separate approaches to organizing it into layers, and by 1983 the two approaches were merged to form the modern OSI model, with its seven layers: Physical, Data Link, Network, Transport, Session, Presentation, and Application.
A particularly important feature of the model was the separation of concerns, meaning that each layer did not need to know the inner workings of the other layers. This led to further standard protocols for communicating between the layers and a market of devices that were more easily compatible with one another.
Is OSI Still Relevant in a Cloud World?
Understanding OSI’s modern relevance requires an examination of why cloud computing has become so popular. Cost tends to get the headlines when discussing cloud benefits, but that isn’t the driving factor. When you dig a little deeper, the real driver rests on two observations: the defining characteristic of software is that it is soft, and most ideas are bad. What does that mean?
Think about the flip phone versus the smartphone. Suppose you wanted to change the color of a numeric dialing key to green. In the case of a flip phone, that key is a physical piece of molded plastic, and changing it on new models is a weeks-long process involving manufacturing adjustments. We won’t even mention the near impossibility of changing it on phones that have already been shipped.
With a smart phone, though, that button color could be changed in a matter of hours through software, and millions of phones in the field could have the change made within days—because software is soft. What the business world has figured out is that it’s easier to implement change using software than hardware.
If you accept the notion that 90 percent of ideas are bad (which is the rule of thumb the venture capital community uses), then 10 percent of them are good—and those are the ones that stick with your customers and generate revenue for you. By extension, it’s better to implement change quickly so you get more rolls of those one-in-ten dice per year, and therefore more chances at finding an innovation that generates more revenue.
As a result, in 2017, every company is a software company, and in the Agile mindset that application developers live by, consuming compute resources on demand is the norm. That is what really drives cloud adoption, because that consumption model of only paying for resources when you need them, along with the ability to get them whenever you want, aligns better with releasing new software weekly—if not daily—as developers desperately try to find the 10 percent of good ideas they get asked to implement.
OSI is what makes this possible, because application developers operating at Layer 7 simply need IP addresses and port numbers for their components to talk to each other and, by design, they trust that the other layers of the model are being implemented faithfully by network engineers. By placing this trust in the lower levels of the model and the professionals responsible for them, application developers can focus on their iterations instead of worrying about underlying network details. While developers tend not to have patience for processes that slow their iterations, they actually welcome the idea that network implementation is outside their problem scope.
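To make that concrete, here is a minimal sketch in Python of what "Layer 7 only needs an IP address and a port" looks like in practice. The host, port, and message are illustrative; the point is that the application code below never touches TCP segments, IP routing, Ethernet frames, or cabling—those layers are trusted to do their jobs.

```python
import socket
import threading

def serve_once(server_sock):
    """Accept one connection and echo the payload back with an ACK prefix.
    The bytes simply 'arrive'; how they traveled is not Layer 7's concern."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(b"ACK:" + data)

# Bind to loopback on an ephemeral port (port 0 lets the OS pick one).
server = socket.create_server(("127.0.0.1", 0))
host, port = server.getsockname()
threading.Thread(target=serve_once, args=(server,), daemon=True).start()

# The "application developer's" entire view of the network: a host and a port.
with socket.create_connection((host, port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

print(reply.decode())  # ACK:hello
```

Everything below the socket API—Transport, Network, Data Link, Physical—is someone else's separation of concerns.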
The network engineer, whose problem that is, now has to worry about VPN tunneling at Layers 2 and 3 so that communication between public cloud and private infrastructure is secure. But as long as an IP address and port number are provided up the chain for Layer 7 to consume, the application developer is none the wiser. The separation of concerns remains just as valid today as it was 30 years ago. In fact, it enables the unprecedented innovation that we have all benefited from at Layer 7 in recent years.
In today’s cloud-centric world, the OSI model is not only relevant, it’s necessary. Without it, the scope that developers are responsible for would grow to the point of slowing down iterations, to the detriment of innovation. With OSI’s separation of concerns, experts at each layer can focus on specific efficiencies and enable the speed that Layer 7 has come to require as the search for innovative revenue expansion plays itself out with Agile software development methodologies.