Saturday, January 29, 2011

From reactive to proactive management of data services

Operators increasingly want to predict the future, manage the present, and learn from history, but what are the limits of today's traditional "assurance" and "monitoring" solutions and how do future solutions complement what operators already have?
Connected Planet spoke with Vesa Haimi, CEO and Founder of Iptune, which specializes in software solutions designed to manage value-added and mobile broadband services in IP-data networks. The company will have a booth at Mobile World Congress (Hall 1 Booth 1E19).

Connected Planet: What should operators look for in terms of data analysis, assurance and monitoring solutions?
Vesa Haimi: The goal for most organizations is to improve ARPU, which corresponds to how much performance and value is squeezed from each vendor’s equipment in an operator’s network. To squeeze out optimal performance in multi-vendor environments, operators need vendor-agnostic monitoring tools and software that help resolve the “finger pointing” so common in today’s complex environments.
Rather than be forced to buy into costly optimization services for each vendor’s equipment, operators should have objective views of every node involved in service delivery so they are not reliant on equipment vendors for accurate SLA management, license thresholds and capacity management decisions. Only with neutral, independent analysis can operators comprehend which network equipment met SLAs during the delivery of services.
In addition to objectivity, solutions should be proactive and flexible in nature. Operators feel the pressure to tackle data growth, introduce new bundles and charging plans, tweak policy control rules, and conduct content caching for effective shaping of traffic and service experiences. To accomplish all of that, operators have to move from reactive to proactive mindsets so they can understand the quality of service (QoS) and quality of experience (QoE) associated with their customers and the consumed services.
This type of proactive mindset requires flexibility in examining changing “use cases.” For example, if there is a noted performance degradation in traffic, operators should be able to see which end-user services have been impacted; which users suffered when accessing key services; which vendor resources were involved in the problem; and approximately how much revenue was lost.
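The drill-down described here can be sketched as a simple pass over per-session records. Everything below is hypothetical (the record layout, field names and revenue figures are invented for illustration); it only shows the idea of correlating degraded sessions back to impacted services, users, vendor resources and revenue:

```python
# Hypothetical sketch: given session records of the form
# (user, service, vendor_node, degraded, revenue), summarize the
# impact of a performance degradation along the dimensions the
# interview lists: services, users, vendor nodes, lost revenue.

def impact_summary(sessions):
    services, users, nodes = set(), set(), set()
    lost = 0.0
    for user, service, node, degraded, revenue in sessions:
        if degraded:                     # keep only affected sessions
            services.add(service)
            users.add(user)
            nodes.add(node)
            lost += revenue              # revenue at risk, approximately
    return {"services": services, "users": users,
            "nodes": nodes, "lost_revenue": lost}

# Toy data: two degraded streaming sessions on one vendor's gateway.
sessions = [
    ("alice", "streaming", "vendorA-gw1", True, 0.40),
    ("bob",   "mms",       "vendorB-sms", False, 0.10),
    ("carol", "streaming", "vendorA-gw1", True, 0.40),
]
summary = impact_summary(sessions)
print(summary)
```

A real solution would of course correlate far richer transaction data, but the shape of the answer is the same: which services, which users, which vendor resources, and roughly how much revenue.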
In that vein, solutions should combine not only monitoring and assurance capabilities, but also business intelligence. Only with that combination can all facets of an organization—from management to technical—transform from reactive to proactive management of data services.
Connected Planet: With all the hype around data analysis, what should solutions really reveal?
Vesa Haimi: There are multiple inter-related “quadrants” operators should be able to understand with their solutions. If you think of the Rubik’s Cube®, there are different dimensions that can be manipulated and mixed as a person turns each side of the cube.
A sophisticated data analysis solution today should open up visibility in the same manner. One quadrant revolves around three “pillars” of core importance—customer, service and vendor equipment. In each of those pillars, operators should have visibility into the past, present and future “states,” as well as visibility into the services themselves (whether streaming video, MMS, Web browsing or others). Throughout the lifecycle of a service, they should be able to see which customers were using certain services, which equipment those services relied on, and what the experience was at different times of the day.
For instance, one of our clients was surprised when we discovered that 10% of their customers used more than 25% of their capacity. They were also shocked at how much traffic Facebook was generating, and the significant differences in terminal success rates and consumption patterns that we found. Information like this provided the operator the ability to create better-targeted packages, as well as avoid poor response times at terminals under pressure from heavy users.
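A finding like “10% of customers used more than 25% of capacity” is a simple concentration measure over per-subscriber volume data. The sketch below is hypothetical (the function name and toy volumes are invented) and only illustrates the computation:

```python
# Hypothetical sketch: what share of total traffic do the heaviest
# `top_fraction` of subscribers consume?

def usage_concentration(volumes_mb, top_fraction=0.10):
    """Share of total traffic consumed by the heaviest subscribers."""
    ranked = sorted(volumes_mb, reverse=True)        # heaviest first
    top_n = max(1, int(len(ranked) * top_fraction))  # size of top cohort
    return sum(ranked[:top_n]) / sum(ranked)

# Toy example: a skewed distribution of per-subscriber monthly volumes.
volumes = [5000, 4000, 300, 250, 200, 150, 120, 100, 80, 60]
share = usage_concentration(volumes, top_fraction=0.10)
print(f"Top 10% of subscribers consume {share:.0%} of traffic")
```

The same ranking, applied per hour and per network element, is what lets an operator tie heavy users to the terminals and equipment feeling the pressure.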
Also, by linking per-user data volumes to the time of day and specific equipment’s performance, we demonstrated to the customer the actual delivered capacity (as compared to the ‘claimed’ or ‘guaranteed’ performance).
Though history doesn’t repeat itself precisely, it does reveal patterns valuable for predicting future outcomes. A solution should therefore take previous cases into account; by reviewing them from different angles, operators can take actions that enable them to maximize and grow ARPU through smarter investments based on real insight.
Connected Planet: What’s the problem you see with traditional assurance and monitoring solutions?
Vesa Haimi: They tend to be bitpipe or equipment monitoring solutions that focus on data packets, technical KPIs and data derived from “log files” and Event Data Records. Much of that information descends from application code written by IT people who, in many instances, no longer work for the organization.
As a result, operators sometimes spend days or even weeks on manual interventions, such as physical examinations of machines, in efforts to collect data pertinent to whatever issues they want to resolve. After all of that, in many cases they are still unable to solve the issues.
In other words, operators will continue to struggle to get some semblance of useful “knowledge” because these solutions lack correlated end-to-end visibility (including all transactions concerning a session).
It’s crucial that operators understand that an end-user service consists of several so-called supporting services, such as DNS, online charging, and so on, as each impacts the overall performance and QoS.
For instance, we found that one of our customers’ browsing-session success rate dropped below 40% during busy hours. The drop had gone unnoticed because a license limitation in the browsing gateway capped the number of concurrent sessions, degrading the experience and the revenues associated with those sessions.
The operator’s existing solutions were unable to reveal the “hidden problem” since the overall performance appeared to be good and each network element appeared to be acting normally.
The bottom line is that operators seeking to understand customer experience and service quality have to be able to monitor transactions beyond the so-called data network entry point (Gi/Gn interface). Then performance problems can be measured for each supporting service, along with any ensuing revenue leakage.
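The browsing-gateway case above shows why aggregate KPIs are not enough: the problem only appears in a per-service, per-hour breakdown. The following sketch uses an invented record format and toy numbers to show how such a breakdown surfaces a service whose busy-hour success rate collapses while every other view looks healthy:

```python
# Hypothetical sketch: per-service, per-hour success rates from a
# transaction log, used to find a "hidden" busy-hour failure.
from collections import defaultdict

def success_rates(records):
    """records: iterable of (hour, service, succeeded) tuples.
    Returns {(service, hour): success_rate}."""
    ok, total = defaultdict(int), defaultdict(int)
    for hour, service, succeeded in records:
        key = (service, hour)
        total[key] += 1
        ok[key] += 1 if succeeded else 0
    return {k: ok[k] / total[k] for k in total}

# Toy log: DNS is healthy all day, but browsing sessions fail heavily
# at the 20:00 busy hour (e.g. a license cap on concurrent sessions).
log = (
    [(12, "dns", True)] * 95 + [(12, "dns", False)] * 5 +
    [(12, "browsing", True)] * 90 + [(12, "browsing", False)] * 10 +
    [(20, "dns", True)] * 95 + [(20, "dns", False)] * 5 +
    [(20, "browsing", True)] * 35 + [(20, "browsing", False)] * 65
)
rates = success_rates(log)
busy = {k: v for k, v in rates.items() if v < 0.40}
print(busy)  # only ("browsing", 20) falls below the 40% threshold
```

Averaged over the whole day, browsing still succeeds most of the time, which is exactly why an element-by-element or day-level view misses the problem.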
Connected Planet: Where is Iptune’s focus different?
Vesa Haimi: Where most solutions focus on access and core services, we focus on VAS/data services, providing answers to day-to-day questions about customers, services and vendors in the changing telecom IP environment.
We say “your 20% is our 100%” because we focus on the 20% (industry average) that value-added services (VAS) represent of an operator’s overall service portfolio. And, though many companies may claim to offer end-to-end, correlated visibility of services, we do so in a manner independent of the vendors and equipment supporting those VAS. We call this Quality-of-Vendor (QoV) performance evaluation, as we believe solutions today have to objectively provide information for overall vendor performance measurements. That is very different from “traditional” solutions, which see only entry points. Operators need to seek end-to-end service correlation for a unified view of services and “supporting services.”
Additionally, a solution should add value to what operators already have—whether business performance, service analysis, service security, or business forecasting. So we make sure our solution draws information from major information sources and then feeds that information to other OSSs. We want our customers to understand the connection between traffic data, revenue, and both actual and potential bottlenecks.
