Once you configure your UXI dashboard to send test results and issues to Google BigQuery, you can immediately start using the data to analyze your networks and services.

The UXI data in BigQuery is partitioned by day (UTC) based on the time the data was written to BigQuery. In addition, the issues and test results are stored in separate tables, with the test results further split by test code, in order to optimize queries. This allows you to extract and analyze only the data you are interested in.
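If you are not sure exactly which tables the integration has created, you can list them directly from BigQuery. The query below is a minimal sketch that relies only on BigQuery's standard INFORMATION_SCHEMA views; substitute your own project and dataset names:

SELECT
  table_name  -- lists every table in the dataset, including the UXI issue and test result tables
FROM
  `<project>.<dataset>.INFORMATION_SCHEMA.TABLES`
ORDER BY table_name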

This article will show some example queries.

Let's say you have several sensors at each site and you want to identify which sites have had the most issues over the past week, what those issues are, and which networks they affect. The query would look like this:

SELECT
  code,
  COUNT(code) AS occurrences,
  context.hierarchy_node_name AS site,
  context.network_name AS network_name,
  context.service_name AS service_name
FROM
  `<project>.<dataset>.issues_bigquery_<customer_uid>`
WHERE
  DATE(_PARTITIONTIME) >= DATE_ADD(CURRENT_DATE(), INTERVAL -7 DAY)
  AND status = 'CONFIRMED'
GROUP BY code, site, network_name, service_name
ORDER BY occurrences DESC
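If you also want to see how those issues trend day by day, you can group on the partition date as well. This variation is a sketch that reuses the same table and columns as the query above:

SELECT
  DATE(_PARTITIONTIME) AS day,  -- partition day (UTC) the row was written
  code,
  COUNT(code) AS occurrences,
  context.hierarchy_node_name AS site
FROM
  `<project>.<dataset>.issues_bigquery_<customer_uid>`
WHERE
  DATE(_PARTITIONTIME) >= DATE_ADD(CURRENT_DATE(), INTERVAL -7 DAY)
  AND status = 'CONFIRMED'
GROUP BY day, code, site
ORDER BY day DESC, occurrences DESC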

Let's say you want to know which issues are ongoing now. The query may look like this:

SELECT
  T1.uid,
  T1.sensor_name,
  T1.network_name,
  T1.service_name,
  T1.code,
  T1.status
FROM (
  SELECT
    uid,
    context.sensor_name AS sensor_name,
    context.network_name AS network_name,
    context.service_name AS service_name,
    code,
    status
  FROM
    `<project>.<dataset>.issues_bigquery_<customer_uid>`
  WHERE
    DATE(_PARTITIONTIME) >= DATE_ADD(CURRENT_DATE(), INTERVAL -7 DAY)
    AND status = 'CONFIRMED' ) AS T1
LEFT JOIN (
  SELECT
    uid,
    context.sensor_name AS sensor_name,
    context.network_name AS network_name,
    context.service_name AS service_name,
    code,
    status
  FROM
    `<project>.<dataset>.issues_bigquery_<customer_uid>`
  WHERE
    DATE(_PARTITIONTIME) >= DATE_ADD(CURRENT_DATE(), INTERVAL -7 DAY)
    AND status = 'RESOLVED' ) AS T2
ON
  T1.uid = T2.uid
WHERE
  T2.uid IS NULL
ORDER BY
  sensor_name DESC
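The query above works as an anti-join: it keeps the CONFIRMED issues from the last seven days that have no matching RESOLVED row. If you prefer a more compact form, the same idea can be written with NOT IN. This is a sketch against the same table; note that NOT IN returns no rows if the subquery produces NULLs, hence the extra guard:

SELECT
  uid,
  context.sensor_name AS sensor_name,
  context.network_name AS network_name,
  context.service_name AS service_name,
  code,
  status
FROM
  `<project>.<dataset>.issues_bigquery_<customer_uid>`
WHERE
  DATE(_PARTITIONTIME) >= DATE_ADD(CURRENT_DATE(), INTERVAL -7 DAY)
  AND status = 'CONFIRMED'
  AND uid NOT IN (
    SELECT uid
    FROM `<project>.<dataset>.issues_bigquery_<customer_uid>`
    WHERE DATE(_PARTITIONTIME) >= DATE_ADD(CURRENT_DATE(), INTERVAL -7 DAY)
      AND status = 'RESOLVED'
      AND uid IS NOT NULL )  -- guard against NOT IN's NULL semantics
ORDER BY sensor_name DESC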

Let's say you made a network change on May 8th and want to identify which sites have the worst DHCP response times since then. The query would look like this:

SELECT
  AVG(dora_time_milliseconds) AS avg_dhcp_time,
  context_sensor_name AS sensor_name,
  context_network_name AS network_name
FROM
  `<project>.<dataset>.test_results_bigquery_<customer_uid>_DHCP`
WHERE
  DATE(_PARTITIONTIME) >= "2022-05-08"
GROUP BY sensor_name, network_name
ORDER BY avg_dhcp_time DESC
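If you only care about locations that are clearly degraded, you can filter the aggregated averages with a HAVING clause. The 300 ms threshold below is purely illustrative; pick a value that matches your own baseline:

SELECT
  AVG(dora_time_milliseconds) AS avg_dhcp_time,
  context_sensor_name AS sensor_name,
  context_network_name AS network_name
FROM
  `<project>.<dataset>.test_results_bigquery_<customer_uid>_DHCP`
WHERE
  DATE(_PARTITIONTIME) >= "2022-05-08"
GROUP BY sensor_name, network_name
HAVING avg_dhcp_time > 300  -- illustrative threshold in milliseconds
ORDER BY avg_dhcp_time DESC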
