Cloud computing has replaced virtualization as the hot topic of 2008. Yet underneath the headlines a very basic shift is taking place in the network, one that promises even more conversation in the very near future. Call this shift the rise of infrastructure 2.0: the result of escalating pressures on an already tired network infrastructure.
As Google, Amazon and others build massive cloud computing complexes (cloudplexes) and upgrade their software-as-a-service offerings, large enterprise networks are already experiencing unprecedented pressures, from scale and complexity to new availability requirements and increasing rates of change.
Yet these pressures could have a material impact on the adoption of Google, Amazon, Microsoft and other cloud-related solutions. Infrastructure 2.0 could result in unanticipated shifts of fortunes between software and infrastructure providers depending on how quickly the infrastructure issue is addressed and by whom. It promises to make many of the network hardware players, including Cisco and Juniper, relevant to this new level of software collaboration.
While many pundits have their heads in the clouds proclaiming the next big thing, there are a few issues that need to be resolved first. And those issues promise to fuel demand for new types of networking solutions.
These new demands of scale, complexity and availability were beyond the wildest dreams of the creators of the core network services that support today's increasingly strained network infrastructure. Many of these services, like DNS and DHCP, are decades old. They were created in simpler days, usually in silos and with no concept of interoperability between the protocols. Those days are now gone. DHCP servers, for example, now perform dynamic DNS updates.
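To make that DHCP-DNS coupling concrete, here is a minimal sketch of the RFC 2136 dynamic update a DHCP server issues after granting a lease, written with the dnspython library. The zone name, server address, hostname and TTL below are illustrative assumptions, not taken from any particular deployment:

import dns.update
import dns.query

def register_lease(hostname, ip):
    # Build a dynamic update (RFC 2136) replacing the host's A record
    # with the address just leased by DHCP.
    update = dns.update.Update("example.com")       # zone to update (assumed)
    update.replace(hostname, 300, "A", ip)          # 300-second TTL (assumed)
    response = dns.query.tcp(update, "192.0.2.53")  # authoritative server (assumed)
    if response.rcode() != 0:                       # 0 == NOERROR
        raise RuntimeError(f"DNS update refused: rcode {response.rcode()}")

register_lease("laptop-42", "192.0.2.150")

Real deployments typically sign such updates with TSIG rather than sending them unauthenticated as sketched here; either way, it is exactly the kind of cross-protocol dependency these services were never originally designed to carry.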
Without new solutions and approaches to address these demands, high-level initiatives, from consolidation to virtualization and even cloud computing, will become increasingly costly and unmanageable. They will no doubt continue, but at a reduced pace as the payoff scenarios shrink.
The Double-edged Sword of TCP/IP
As TCP/IP proliferated over the years, it connected more and more users and devices, and then increasingly powerful, mission-critical applications. When TCP/IP was created, it was intended to support survivable networks that could quickly re-route around lost network nodes.
While that mission was certainly met, many other key criteria, including security, were expressly off the table. And the core services that we depend on today to automate network configuration and naming (i.e. DHCP and DNS) were years away from inception, because the potential scale of TCP/IP networks was at best a distant dream.
The simplicity of TCP/IP was one of the reasons it spread so quickly. Its spread drove the creation of a multitude of network solution and appliance categories as its mission continued to expand, and as the notion of network connectivity grew to include literally everything.
Yet that shift in mission from the earlier, simpler days has created a unique set of risks and rewards today as enterprises ponder consolidation, virtualization and cloud computing initiatives. Those who understand these shifts and their implications will be better prepared to profit from them when they take place.
Many of you probably remember similar over-extensions, when technologies were delivered into new environments and the resulting problems created opportunities for new solutions.
There was VoIP in the late 1990s challenging older network gear and firewalls. Web-enabled enterprise applications gave WANs all kinds of challenges with their "chatty protocols" and new types of endpoints. As more enterprise servers started facing the Internet, all kinds of security vulnerabilities were discovered.
All of these events happened because the rate of technology adoption was moving beyond the vision of either the developer or the implementer, and entirely new solution markets soon appeared to address the friction in those deployments.
To put this in perspective, let me share a recent example of technology over-extension and its impact on market momentum.
VMware: The Perils of Technology Over-Extension
VMware saw its fortunes rise as virtualization entered the data center. Its sizzling IPO was based on expectations of heady growth in the huge new production data center market, and it was empowered by a lineup of blue chip companies buying shares. VMware had assembled an impressive ecosystem of partners and announced VMsafe in Cannes in February 2008, partly to address some of the new security and management/operational issues inherent in production data center virtualization.
Yet these initiatives and high-profile partners weren't enough to compensate for the over-extension of virtualization in the production data center. As we learned, there was a sizable gap between dev/test and production deployment requirements. Virtualization security was discovered almost by accident by a handful of analysts and pundits who had trouble getting virtualization pros to pay attention. As a result, glowing market expectations ran head-on into virtualization-lite.
The Microsoft Hyper-V announcement over the summer punched another hole in the boat, alongside lowered expectations and delayed follow-through on VMsafe partner offerings. The VMware ecosystem wasn't fast or powerful enough to help VMware meet the heady analyst expectations set in 2007.
As VMware dutifully informed Wall Street of its reduced expectations, its market cap reflected the realization that data center virtualization would require more than alliance partners and a proven virtualization platform. Growth would still be impressive, but slower than initially expected. I think a material portion of this adjustment was based on new requirements for virtualization in the production data center, one of which was virtualization security.
No doubt VMware will address this and other new challenges related to the data center, including winning over converts unfamiliar with the technology's benefits and earning "production" credibility with the market. These challenges also promise to power new solutions and new players, in the same way that similar shifts have created other new categories.
When web-enabled applications started failing on wide area networks, we saw server load balancers replaced by more sophisticated application front ends with specialized layer 4-7 application delivery capabilities. Those new solutions fueled more than a billion dollars in acquisitions by leading network players who understood the problems enterprises faced managing protocols never intended for the larger, more dispersed networks they were eventually serving.
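The distinction is easy to see in miniature: a layer 4 balancer picks a server per connection without reading the payload, while a layer 7 front end inspects the request itself before deciding. A toy sketch, with hypothetical backend pools and routing rules:

from itertools import cycle

BACKENDS = ["10.0.0.11:80", "10.0.0.12:80"]   # generic web pool (hypothetical)
_round_robin = cycle(BACKENDS)

def pick_backend_l4():
    # Layer 4: rotate through servers per connection, blind to content.
    return next(_round_robin)

def pick_backend_l7(method, path):
    # Layer 7: read the HTTP request first, so chatty or heavyweight
    # traffic can be steered to a specialized application front end.
    if path.startswith("/reports"):
        return "10.0.0.21:80"     # long-running report traffic (hypothetical)
    if method == "POST":
        return "10.0.0.22:80"     # write traffic gets its own pool (hypothetical)
    return pick_backend_l4()      # everything else: plain balancing

print(pick_backend_l7("GET", "/reports/q3"))   # -> 10.0.0.21:80

It was that ability to make decisions on application content, rather than mere connections, that justified the acquisition premiums.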
Then there is the recent cloud computing hoopla. I predict that it will be more of the same, despite spending, buzz and vast repositioning exercises by large and small companies trying to escape the gravity of slow spending growth.
Cloud Computing: More of the Same
Cloud computing has similar gaps to address before it becomes economically compelling. At this point cloudplexes are being built in areas with low cost electricity, cheap real estate, favorable tax structures and/or heavy duty network access. Some cloudplexes may eventually deliver robust enterprise applications; others may just be a part of legacy operational initiatives.
Like virtualization, the promise of cloud computing is bound to be tested by the challenges of unintended consequences, including these new infrastructure demands already pressuring large enterprise networks.
For example, enterprises will be hesitant to have their core applications and databases hosted by a third party, for obvious security and availability reasons. Note Amazon's recent cloud stumbles reported by PCWorld.
However, a major objection to cloud computing is the performance and availability of the services. If something fails in the vendor's data center, there is little for customers to do but sit and wait for a solution, while fielding end-user complaints. - Juan Carlos Perez, IDG News Service.
Current Diseconomies of Scale Will Drive Infrastructure 2.0
For many organizations the network defines the limits and potential of the company. Yet the combination of ever more critical and powerful applications with the accelerating spread of TCP/IP puts the network in an increasingly precarious position. With scale, complexity and availability demands come added operational burdens. Most IT teams are operating with minimal budget increases, yet the cost of managing each IP address is going up as they add more endpoints, not down as one might expect.
That raises the question of whether a network can scale to such heights in support of such powerful applications while maintaining acceptable availability and managing higher rates of change. If large enterprises are already experiencing rising costs per IP address, it might take more than cheap electricity and real estate to make cloud computing commercially viable.
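A back-of-envelope model shows the shape of the problem: if per-address management effort rises with network size, total operating cost grows faster than the address count. The numbers below are invented purely to illustrate the diseconomy, not drawn from any study:

def annual_cost_per_ip(endpoints, base_cost=20.0, complexity_factor=0.05):
    # Hypothetical model: per-address cost rises with scale instead of
    # falling, reflecting the coordination overhead of larger, more
    # interdependent networks.
    return base_cost + complexity_factor * (endpoints ** 0.5)

for n in (1_000, 10_000, 100_000):
    per_ip = annual_cost_per_ip(n)
    print(f"{n:>7,} endpoints: ${per_ip:6.2f} per IP, ${per_ip * n:>12,.0f} total")

Under these made-up assumptions a hundredfold increase in endpoints yields far more than a hundredfold increase in total cost; the forthcoming Computerworld data mentioned below should show whether real networks behave this way.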
We could see the same expectations readjustment in cloud computing as experienced earlier this year in virtualization.
While Oracle's Ellison jokes about cloud computing being the latest tech fashion and Google and Amazon invest in massive cloudplexes, everyone has quietly assumed that the network will just continue to become more complex, more available and much larger. That's an even bigger assumption than the one made by those predicting the rapid virtualization of the data center despite virtualization security (virtsec) and other friction issues.
InfoWorld just predicted at least a partial shift to cloud computing in five years:
My main prediction is that the high cost of power and space is going to force the IT world to look at cloud services, with a shift to computing as a cloud resource occurring in the next five years. - Brian Chee, InfoWorld
Yet can Google, Amazon and others effectively monetize cloud computing without a fundamental shift in infrastructure and how it's managed? Or could we see the same rise and fall of valuations in their stocks as growth is factored in and then out?
Perhaps the real beneficiaries of cloud computing will be the network hardware manufacturers who seize the opportunity to monetize the build-out of Infrastructure 2.0. Just as the branch office expansion was a boon for network gear players addressing the proliferation of TCP/IP interconnectivity and the challenges of various protocols and applications, the infrastructure 2.0 expansion would emerge out of continued proliferation and even more challenging new demands.
Before the shift to cloud computing, it is inevitable that we'll see a shift to infrastructure 2.0. As costs per IP address continue to increase while networks scale and availability pressures mount, you can expect more large enterprises to embrace infrastructure 2.0. Those who don't will be trapped in scenarios requiring more effort to produce fewer results, with higher-profile consequences.
In a couple of weeks I'll be posting the results of a yet-to-be-published Computerworld research report covering IP address management costs correlated with network size at Archimedius and the Infoblox Library.
You can read my disclaimer at: About ARCHIMEDIUS.
(Source: Itstrategyblog.com)