
Simply Tech

Virtually exhilarating

What virtual technology means to agents and brokers

By Nabeel Sayegh


While the contemporary landscape of the insurance industry is populated with modern technology and niche markets that were difficult to imagine even a decade ago, today's insurance agency and brokerage owners still grapple with a timeless challenge: Be as productive and profitable as possible, and spend as little money as you can in the process.

For those of us charged with managing our companies' computing infrastructures, our responsibility is to rise to this challenge while we maximize the returns on our IT expenses. So, as IT professionals, how do we accomplish this goal? Simple: Architect and deploy the most innovative, manageable and cost-effective infrastructures we can to meet our industry's demands for today and tomorrow.

OK. So maybe it's not that simple. But what it really boils down to is making technology work for us instead of the other way around.

Over the past 15 years, I've worked in many different IT environments, from small offices to global information processing centers. I've seen my share of technological advancements. But every once in a while, a really significant breakthrough happens that changes not only the way we run IT, but also the way we—and the rest of the world—think about our work.

Recent innovations in server virtualization and cloud computing have essentially reinvented the way we deploy and operate our IT foundations. These advances have also opened up possibilities for us that were once cost-prohibitive and/or too complex to realize.

IT professionals constantly challenge the status quo to make sure that the technology we deploy meets the needs of our clients while minimizing the effort and cost required to manage it. Hosting and maintaining insurance automation products for thousands of agencies and brokerages across North America, for instance, is a daunting task.

The responsibility is equally daunting for many independent agents and brokers who locally host their own applications and automation tools. An important question is, "How does virtualization technology help to improve and streamline the delivery, availability, and security of your critical applications and their associated data?"

One of the most important benefits of virtualization has been optimizing our hardware investment. In the past, most server infrastructure was significantly underutilized: it was common for a dedicated physical server to run a single application and spend most of its capacity idle. Virtualization has allowed us to shrink our server footprint by consolidating those workloads onto fewer, better-utilized machines, reducing operating costs while improving our overall return on investment.
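
To put rough numbers on that idea, here is a quick back-of-the-envelope calculation in Python; the utilization figures are assumptions chosen for illustration, not industry measurements:

    # Back-of-the-envelope consolidation math; every figure below is
    # an illustrative assumption, not measured data.
    physical_servers = 12           # standalone servers, each mostly idle
    avg_utilization = 0.10          # each uses roughly 10% of its capacity
    host_target_utilization = 0.60  # comfortable load for a virtualization host

    # Total workload, expressed in "fully busy server" units.
    total_load = physical_servers * avg_utilization           # 1.2 servers' worth
    hosts_needed = -(-total_load // host_target_utilization)  # ceiling division

    print(f"{physical_servers} servers -> {int(hosts_needed)} virtualization hosts")
    # Output: 12 servers -> 2 virtualization hosts

Even leaving generous headroom on each host, a dozen lightly loaded servers collapse into two machines, with corresponding savings in power, cooling, and maintenance.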

For smaller IT shops, there might be limited capital to invest in physical infrastructure. As a result, you need to find ways to do more with less, and virtualization makes that possible.

What's more, managing less physical equipment means there is less hardware that can fail in the first place.

Virtualization providers including VMware, Microsoft, and Citrix are making their products increasingly attractive to small and medium-sized businesses looking to reduce IT expenditures and increase productivity, agility, and capability. For larger organizations, virtual technology is already a standard component for effectively and intelligently delivering products and services while minimizing total cost of ownership. Maybe this is why all of the Fortune 100 companies use virtualization in some capacity.

In many ways, we've come to take for granted a core benefit of virtualization technology—indeed, the very aspect of it that keeps us from falling victim to business interruption: the concept that applications critical to running our businesses can be resilient and remain available pretty much at all times.

In ideal situations, we hope for the best but plan for the worst. But in reality, we don't always consider all of the consequences we would endure in the event of an extended hardware failure that could take down a primary application.

Well, virtualization has a solution. When you virtualize servers and/or workstations, you also virtualize the applications and data running within them. And because you remove the dependency on the underlying hardware, those applications become portable. If a physical device fails, you can "move and restart" the virtual server on another available server or workstation. As a result, your downtime is measured in minutes, not hours or days.
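
For readers who like to peek under the hood, here is a minimal sketch of what that "move and restart" can look like, using the open source libvirt Python bindings purely as an illustration. The hostname, file path, and VM name are hypothetical, and commercial platforms such as VMware vSphere expose equivalent operations through their own tools.

    # Sketch: restart a failed host's virtual server on a healthy host.
    # Assumes the libvirt Python bindings, shared storage visible to
    # both hosts, and hypothetical names throughout.
    import libvirt

    # Connect to a surviving host that can reach the same shared storage.
    backup_host = libvirt.open("qemu+ssh://backup-host.example.com/system")

    # Re-register the virtual server from its saved configuration file.
    with open("/shared/configs/agency-app.xml") as f:
        vm = backup_host.defineXML(f.read())

    vm.create()  # boot it; the application is back in minutes
    print("agency-app is running again on the backup host")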

Another area to consider is capacity management: the ability to forecast and respond to sudden growth. This is a critical part of managing an IT environment, because mismanaging it can bring the entire environment to a screeching halt. We need to be agile and proactive in how we maintain and provision additional server and storage resources.

Consider the challenges involved in implementing a new application or adding storage. Traditional methods involve hardware acquisitions; physical installations; increased space, power, and network port consumption; and, possibly, extended downtime. These steps eat into a business owner's profitability, disrupting that delicate balance of boosting revenue while cutting expenses. With a properly implemented virtual infrastructure, additional computing resources and storage capacity can be provisioned in minutes, without excessive capital investment or costly downtime.
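
To give a flavor of how lightweight that provisioning can be, the sketch below stands up a new virtual server from a stored template, again via the libvirt Python bindings; the template path and server name are assumptions. Because the template definition omits a UUID and MAC address, the hypervisor generates fresh ones for the copy.

    # Sketch: provision a new virtual server from a template in minutes.
    # Assumes the libvirt Python bindings; all paths and names are
    # hypothetical, and the template XML omits UUID and MAC address so
    # the hypervisor can generate unique ones.
    import libvirt

    conn = libvirt.open("qemu:///system")

    # Load the template definition and give the copy its own name.
    with open("/shared/templates/app-server.xml") as f:
        xml = f.read().replace("template-vm", "new-app-server")

    vm = conn.defineXML(xml)  # register the new virtual server
    vm.create()               # power it on: no racking, no cabling
    print("new-app-server provisioned and running")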

All of this is just the tip of the iceberg. Imagine being able to quickly bring up new servers to install the next version of a software program. You could evaluate the new product's features and make sure the app meets your business needs. You could train your users on the app without ever touching your production system. What about recovering from a virus or malware infection? Or rolling back to a known good state after a failed application or operating system upgrade, without the risk of losing data or productivity? How about performing product lifecycle management tasks, such as replacing leased, antiquated, or malfunctioning hardware, without ever taking your environment offline?
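
To make one of those rollback scenarios concrete, the sketch below snapshots a virtual server before a risky upgrade and rewinds it if anything goes wrong. As before, the libvirt Python bindings and every name here are illustrative assumptions, not a prescription.

    # Sketch: snapshot a VM before an upgrade, then roll back on failure.
    # Assumes the libvirt Python bindings; the VM, snapshot, and upgrade
    # routine are all hypothetical.
    import libvirt

    def run_upgrade():
        # Placeholder for the real upgrade procedure.
        return False  # pretend the upgrade failed

    conn = libvirt.open("qemu:///system")
    vm = conn.lookupByName("agency-app")

    # Capture a known good state before touching anything.
    vm.snapshotCreateXML(
        "<domainsnapshot><name>pre-upgrade</name></domainsnapshot>", 0)

    if not run_upgrade():
        # Rewind the whole server (OS, app, and data) to the snapshot.
        snap = vm.snapshotLookupByName("pre-upgrade")
        vm.revertToSnapshot(snap)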

Those of us in the IT world weigh the options every day. We deal with our own delicate balance, tempering the thrill of what's possible with the reality of what's probable. Stay tuned as virtual technology continues to deliver on the promise of great benefits for the insurance community.

The author

Nabeel Sayegh is a data center engineering supervisor at Applied Systems, Inc., developer of agency management solutions including TAM®, Epic®, Vision®, and DORIS®.



--sidebar--

"The cloud" as retro mainframe?

If the business sector's move toward the "cloud" looks vaguely familiar to you, you're not alone. As cloud computing has flourished over the past decade, some IT experts have noticed that the basic concept strongly resembles something thought to have breathed its last in the early 1990s: the mainframe.

The cycle has come full circle, experts say. Among them is Greg Papadopoulos, former chief technology officer of Sun Microsystems. Long before Papadopoulos took his considerable experience as an innovator to a venture capital firm for entrepreneurs, he noted the curious circle of IT life: The world moved away from centralized mainframe computing in the late 1980s, to distributed computing via personal computers in the 1990s and 2000s, and now back to centralized computing in "the cloud."

As early as 2009, in their paper "The Datacenter as a Computer," Luiz André Barroso and Urs Hölzle, then Google engineer and Google fellow, respectively, wrote that data centers can't be viewed just as a collection of servers. Those servers have to work together to deliver high levels of Internet-based service. "In other words, we must treat the data center itself as one massive warehouse-scale computer," the Google guys wrote. Sounds curiously like a mainframe.

Imagine sitting at your desk in the 1970s. There would be your computer terminal, with its massive keyboard connected to a clunky, boxy CRT monitor offering a soulless black screen with alien-green text. That terminal didn't have much computing power of its own. It was a dumb terminal, sending instructions to the mainframe computer that gave it life and, in turn, did all the computing. And the network that connected them was secure.

The cloud's technology is light-years more advanced, of course. But the concept is eerily similar. Virtualization concentrates computing power on servers in data centers. The peripheral devices connected to them through the Internet act in much the same way as those dumb terminals of the 1970s. Desktops, laptops, tablets, and smartphones have access to the cloud, but they don't do the actual computing—at least not for the applications hosted on the servers. Virtualization just makes them look as if they do, appearing to the casual observer much like the old mainframe/dumb terminal architectures.

 
