QoS and service level monitoring has always presented a challenge for telecommunications companies. The increasing uptake of IP voice and video services, the vast data volumes they generate, and the lack of an end-to-end view make monitoring the service experience in real time increasingly difficult.
In this blog I’m looking at the core building blocks of a real-time IP service monitoring solution, using a much-simplified view of a real-time application. Diagram 1 illustrates the basic problem – how to monitor an IP service when the end-to-end view is only possible by piecing together large volumes of events from many different sources: the core network provider’s network, the home network and cable modem, and the service provider’s platforms.
In SQLstream we capture each event stream in real time. Applications are built as streaming pipelines – unlike a traditional database solution, where event data must first be stored and then processed, SQLstream streams the data through processing views, capturing, combining, filtering, aggregating and applying analytics to the event streams without having to store the data. This enables real-time operational intelligence at extremely high data volumes with very low latency.
The first views in the pipeline capture the data streams. A declaration for an external data feed is shown below: the real-time MyEvent stream, where the source of events is an external system agent or integration adapter.
CREATE OR REPLACE STREAM MyEvent (
    "eventName" VARCHAR(10),
    "eventSeq" BIGINT,
    "eventVal1" INTEGER,
    "eventVal2" SMALLINT,
    "eventVal3" BIGINT
)
DESCRIPTION 'source of events';
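In production the adapter pumps rows into the stream, but for testing events can also be inserted directly from a client session. As a minimal sketch (the values here are invented purely for illustration):

INSERT INTO MyEvent VALUES ('RawEvent', 1, 42, 7, 1000);

Note that ROWTIME, the event timestamp used by the windowed views below, is a built-in column of every stream and does not appear in the declaration.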
MyEvent data is first filtered, searching for the events of interest. As illustrated in the code example below, these initial views tend to be as simple as possible in order to maximize reuse – the simplest being a SELECT STREAM * FROM ... WHERE statement. Streams can be combined, grouped or joined in a single view, or a single view can be provided per stream, or both.
CREATE OR REPLACE VIEW RawEvents AS
    SELECT STREAM *
    FROM MyEvent
    WHERE "eventName" = 'RawEvent';
Diagram 2 illustrates the concept of the streaming data pipeline, using a simplified example for exception detection. The SQL view illustrated above is the Stream Capture #1 view in the diagram. The use case is built on a real-world example: raising an alert if a given number of events of a particular type or value are detected within a specified time window.
The second view in the pipeline, Stream Processor #1, is shown below. In this example the view performs the basic processing of the stream, counting the number of events that arrive within a sliding time window, in this case 180 seconds.
CREATE OR REPLACE VIEW CountedEvents AS
    SELECT STREAM
        *,
        -- number of events seen within the window
        COUNT("eventName") OVER win AS "eventCount",
        -- timestamp and sequence number of the oldest event in the window
        FIRST_VALUE(RE.ROWTIME) OVER win AS "firstEventTime",
        FIRST_VALUE("eventSeq") OVER win AS "firstEventSeq"
    FROM RawEvents AS RE
    WINDOW win AS (RANGE INTERVAL '180' SECOND(3) PRECEDING);
The final stage in this particular processing pipeline detects the alert condition – here, an alert is raised once three or more matching events have been seen within the window.
CREATE OR REPLACE VIEW FlagTriggerEvents AS
    SELECT STREAM
        *,
        "eventCount" >= 3 AS "alert"
    FROM CountedEvents;
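A downstream consumer – a dashboard, an alerting adapter, or simply the next view in the pipeline – can then subscribe to the triggered alerts with a streaming query. Since "alert" is a boolean column, a minimal sketch would be:

SELECT STREAM * FROM FlagTriggerEvents WHERE "alert";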
It would of course be possible to include all the processing in a single view. However, maximizing reuse of views is a major consideration when building a stream processing application. The example illustrates how a pipeline can be constructed where each view can have any number of consumers – any number of Rule views can read from the Stream Processor #1 view, and any number of views can read directly from the stream capture view.
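As a sketch of that reuse, a second rule view could read from the same CountedEvents view and apply a different condition, without re-capturing or re-windowing the stream. The view name, threshold and column chosen here are invented for illustration:

CREATE OR REPLACE VIEW FlagHighValueEvents AS
    SELECT STREAM
        *,
        ("eventVal1" > 100 AND "eventCount" >= 2) AS "alert"
    FROM CountedEvents;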
The application includes significantly more sophisticated integrations, features and analytics than illustrated here. For example:
- Applying multiple rules over the same streams
- Recording and forwarding the events responsible for generating an alert
- Detecting escalation
- Detecting clearance events
- Joining with alert history to identify exceptional events that deviate significantly from historical norms (see the sketch after this list)
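For instance, the historical comparison might look something like the following sketch, assuming a hypothetical AlertHistory table holding a rolling average event count per event type, and assuming a stream-to-table join is available; the table and column names are illustrative only:

CREATE OR REPLACE VIEW ExceptionalEvents AS
    SELECT STREAM
        CE.*,
        AH."avgEventCount" AS "historicalAvg"
    FROM CountedEvents AS CE
    JOIN AlertHistory AS AH
        ON CE."eventName" = AH."eventName"
    WHERE CE."eventCount" > 2 * AH."avgEventCount";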
These use cases are important components of a complete solution, and I’ll be providing examples in subsequent blogs, explaining how these have been implemented.