Cindy Wang

October 15, 2022

In my previous blog Programmatically export a Visual Analytics report to PDF - SAS Users, I used the SAS Visual Analytics SDK to export a report to PDF, which is quite simple if you have basic knowledge of JavaScript programming. It works for both the latest version of SAS Viya and version 3.5. The new version of SAS Viya offers improvements, and we now have the option to export a VA report to PDF using the REST API, with no need for JavaScript programming. This is what I’ll discuss in this post.

The API under the Visual Analytics category in the latest SAS Viya provides the ability to export a report, or a report object, to a PDF file. It also provides the ability to create and run a job to do the exporting. In fact, we can export a report to PDF, image, package, or data using these APIs, and all are quite straightforward. In this article, I will show how to export a report or report object to a PDF file directly, and how to create and run a job to export to a PDF.

Get all the API links of Visual Analytics

The API under Visual Analytics provides the ability to retrieve all the API links via the HTTP ‘GET’ method. Be sure to set "Accept" = "application/vnd.sas.api+json" in the HEADERS statement of PROC HTTP. Below is my sample code snippet; I define a JSON library so we can view the output of PROC HTTP visually.

%let BASE_URI=%sysfunc(getoption(SERVICESBASEURL));
FILENAME vaJson TEMP;
PROC HTTP METHOD="GET" oauth_bearer=sas_services OUT=vaJson headerout=hdrout
    URL = "&BASE_URI/visualAnalytics/";
    HEADERS "Accept" = "application/vnd.sas.api+json";
RUN;
LIBNAME vaJson json;

If we see the ‘200 OK’ message returned (something like below), we know the PROC ran successfully.

Now in SAS Studio, if I go to the ‘Libraries’ tab and double-click the LINKS table in the VAJSON library, all the API links of Visual Analytics are listed in the ‘href’ column as shown below. We see support for exporting the report PDF, image, package, and data, with the corresponding method and href for each.
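The LINKS table simply surfaces the standard SAS Viya links collection: each entry carries a method, a rel name, and an href. As a rough illustration of reading such a response outside of SAS (the rel names and hrefs below are abbreviated placeholders, not the exact service output), the links can be indexed like this:

```python
import json

# Abbreviated, illustrative sample of a Viya links response (not the full payload)
sample = json.loads("""
{
  "links": [
    {"method": "GET", "rel": "exportPdf",
     "href": "/visualAnalytics/reports/{reportId}/pdf"},
    {"method": "POST", "rel": "createPdfJob",
     "href": "/visualAnalytics/reports/{reportId}/exportPdf"}
  ]
}
""")

# Index the links by their 'rel' name so each method/href pair is easy to look up
links = {link["rel"]: (link["method"], link["href"]) for link in sample["links"]}

print(links["exportPdf"])
```

The same lookup works on the real response; only the list of rel names will be longer.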

Export a report or report object to PDF

Now, let me export a report to PDF directly. Below is the code snippet I used. With the FILENAME statement, the exported PDF is saved to a physical location (I save it as the rpt.pdf file in the /tmp/ folder). Be sure to set "Accept" = "application/pdf" in the HEADERS statement of PROC HTTP. In my example, I export a report with two report objects: a bar chart and a forecasting object.

%let BASE_URI=%sysfunc(getoption(SERVICESBASEURL));
FILENAME rptFile "/tmp/rpt.pdf";
PROC HTTP METHOD="GET" oauth_bearer=sas_services OUT=rptFile headerout=hdrout
    URL = "&BASE_URI/visualAnalytics/reports/d940126c-f917-4a13-8e1a-51b6729f50ec/pdf";
    HEADERS "Accept" = "application/pdf"
            "Accept-Language" = "*"
            "Accept-Locale" = "en-US";
RUN;

Run the code, and if the ‘200 OK’ message is returned, the export succeeded. We can go to the /tmp/ folder and check the rpt.pdf file there.

Next, let me export a report object to PDF. If you are not familiar with the object composition of a VA report, refer to my earlier post Discover Visual Analytics Report Paths with REST APIs. Unlike exporting a whole report, I need to set the ‘reportObjects’ parameter for the exported object. With the ‘GET’ method in PROC HTTP, I use the QUERY option to set all the parameters I want to use for the object. For example, I set some cover page text. Below is the code snippet for exporting a report object.

%let BASE_URI=%sysfunc(getoption(SERVICESBASEURL));
FILENAME rptFile "/tmp/rpt.pdf";
PROC HTTP METHOD="GET" oauth_bearer=sas_services OUT=rptFile headerout=hdrout
    URL = "&BASE_URI/visualAnalytics/reports/d940126c-f917-4a13-8e1a-51b6729f50ec/pdf"
    QUERY = ("reportObjects"="ve58" "includeCoverPage"="true" "coverPageText"="This is cover page for a report object.");
    HEADERS "Accept" = "application/pdf"
            "Accept-Language" = "*"
            "Accept-Locale" = "en-US";
RUN;
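For reference, the QUERY option amounts to URL-encoding the name/value pairs onto the request URL. A small Python sketch of that encoding, using the parameter values from the snippet above:

```python
from urllib.parse import urlencode

# The same parameters passed via the QUERY option in PROC HTTP
params = {
    "reportObjects": "ve58",
    "includeCoverPage": "true",
    "coverPageText": "This is cover page for a report object.",
}

# urlencode percent-escapes spaces and punctuation for use in a URL
query = urlencode(params)
print(query)
```

The resulting string is what gets appended after `?` on the /pdf endpoint.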

Similarly, if the ‘200 OK’ message is returned, the export ran successfully. The following screenshots show the exported report PDF and the exported report object PDF, respectively.

Create and run a job to export a PDF

Besides exporting a report or report object directly, the API under Visual Analytics provides the ability to execute the export asynchronously as a job. The differences between direct export and job export are:

  • The ‘POST’ method is used for the job export action.
  • To export a report or report object by running a job, we need to apply the rendering option values in the request object, as well as options for the creation of the PDF.
  • Job export saves the exported PDF file to a SAS Viya Content server folder, not to a physical disk location. The PDF file can then be downloaded to local disk from SAS Studio or SAS Drive.

Below is the code snippet for creating a job to export a report PDF. Be sure to set "Accept" = "application/vnd.sas.visual.analytics.report.export.pdf.job+json" and "Content-Type" = "application/vnd.sas.visual.analytics.report.export.pdf.request+json" in the HEADERS statement of PROC HTTP.

%let BASE_URI=%sysfunc(getoption(SERVICESBASEURL));
PROC HTTP METHOD="POST" oauth_bearer=sas_services headerout=hdrout 
    URL = "&BASE_URI/visualAnalytics/reports/d940126c-f917-4a13-8e1a-51b6729f50ec/exportPdf"  
     IN = '{
            "resultFolder": "/folders/folders/9d78f045-e7d9-4e82-b4aa-c7220cb85558",
            "resultFilename": "Exported PDF File.pdf",
            "nameConflict": "replace",
            "wait": 30,
            "timeout": 60,
            "options": {
                "orientation": "landscape",
                "paperSize": "A4",
                "showPageNumbers": true,
                "includeCoverPage": true,
                "coverPageText": "This is cover page for export pdf job."
            },
            "version": 1
           }';
    HEADERS "Accept" = "application/vnd.sas.visual.analytics.report.export.pdf.job+json"
            "Content-Type" = "application/vnd.sas.visual.analytics.report.export.pdf.request+json"
            "Accept-Language" = "*"
            "Accept-Locale" = "en-US";
RUN;

If we see the ‘201 Created’ message returned as shown below, we know the export job ran successfully.
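A common reason for not getting ‘201 Created’ is a malformed IN= body (an unbalanced brace or a missing comma yields an error response instead). One quick way to sanity-check the JSON before pasting it into PROC HTTP is to round-trip it through a parser; here is a Python sketch using the body from the example:

```python
import json

# The same request body passed via IN= in PROC HTTP
body = """{
    "resultFolder": "/folders/folders/9d78f045-e7d9-4e82-b4aa-c7220cb85558",
    "resultFilename": "Exported PDF File.pdf",
    "nameConflict": "replace",
    "wait": 30,
    "timeout": 60,
    "options": {
        "orientation": "landscape",
        "paperSize": "A4",
        "showPageNumbers": true,
        "includeCoverPage": true,
        "coverPageText": "This is cover page for export pdf job."
    },
    "version": 1
}"""

# json.loads raises an error on any syntax problem, so a clean parse
# means the body is at least structurally valid JSON
request = json.loads(body)
print(request["options"]["paperSize"])
```

If the parse fails, fix the reported position in the body before retrying the PROC HTTP call.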

The screenshot below shows the exported report PDF.


In summary, for the latest version of SAS Viya, the REST API under the Visual Analytics category provides an easy way to export a report or a report object to a PDF file, either directly or asynchronously through a job. We can also easily export a report object to an image, report data to CSV, TSV, or XLSX, and report resources to a package. You are encouraged to find more in the Visualization API Reference.

Export a Visual Analytics report using REST APIs was published on SAS Users.

September 30, 2022

Recently I was on an email thread where someone asked how to do a swimmer plot in SAS Visual Analytics. People replied with other ways using SAS code. Though there is not a standard swimmer plot in VA, I thought it might be possible to create one with a custom graph. So I decided to give it a try.

I reviewed some related materials about the swimmer plot and discovered a useful blog post by my colleague Sanjay Matange. His post provides SAS code to generate the data used by the swimmer plot as well, making things simple. My next step was to create a swimmer plot template in SAS Graph Builder and draw the plot in SAS Visual Analytics.

Create swimmer plot template

This will be done in SAS Graph Builder, where I will create a custom graph template named ‘Swimmer Plot’.

The composition of the swimmer plot template

To make the Swimmer plot template, I use two schedule charts and four scatter plots, as shown below.

The template is made up of the following charts:

a) Schedule Chart 1 will draw the High/Low bar representing the duration of each subject. It also needs to indicate the type of disease stage – Stage 1, 2, 3, 4.

b) Schedule Chart 2 will draw the Start/End line representing the duration of each response of each subject. It also needs to indicate the type of response - Complete or Partial.

c) Scatter Plot 1 will be used for the Start event, and Scatter Plot 2 for the End event.

d) Scatter Plot 3 will be used to indicate the Durable responder, and Scatter Plot 4 will show if the response is a Continued response.

Creating the swimmer plot template

In SAS Graph Builder, drag the above plots one by one to the work area. Then perform the settings listed for each object in the ‘Options’ pane.

Next, we need to define roles for these plots in the ‘Roles’ pane.

1 - In the ‘Shared Roles’ section, click the toolbox icon next to the role ‘Shared Role 1’. Edit the role to update the Role Name to ‘Item’ and click OK button.

2 - In the ‘Schedule Chart 1’ section:

a) Click the toolbox icon next to the role ‘Schedule Chart 1 Start’. Edit the role to update the Role Name to ‘Low’ and click OK button.

b) Click the toolbox icon next to the role ‘Schedule Chart 1 Finish’. Edit the role to update the Role Name to ‘High’ and click OK button.

c) Add role by clicking the ‘+ Add Role’ link, update the Role Name to ‘Stage’, leave the Role Type as ‘Group’, check the ‘Required’ checkbox, and click OK button.

3 - In the ‘Schedule Chart 2’ section:

a) Click the toolbox icon next to the role ‘Schedule Chart 2 Start’. Edit the role to update the Role Name to ‘Start’ and click OK button.

b) Click the toolbox icon next to the role ‘Schedule Chart 2 Finish’. Edit the role to update the Role Name to ‘Endline’ and click OK button.

c) Add role by clicking the ‘+ Add Role’ link, update the Role Name to ‘Status’, leave the Role Type as ‘Group’, check the ‘Required’ checkbox, and click OK button.

4 - In the ‘Scatter Plot 1’ section, click the toolbox icon next to ‘Scatter Plot 1 X’, and select ‘Create Shared Role with Another Role’ > ‘Start’. Update the Role Name to ‘Start’ and click OK button.

5 - In the ‘Scatter Plot 2’ section, click the toolbox icon next to ‘Scatter Plot 2 X’, and select ‘Edit Role’, update the Role Name to ‘End’, and click OK button.

6 - In the ‘Scatter Plot 3’ section, click the toolbox icon next to ‘Scatter Plot 3 X’, and select ‘Edit Role’, update the Role Name to ‘Durable’, and click OK button.

7 - In the ‘Scatter Plot 4’ section, click the toolbox icon next to ‘Scatter Plot 4 X’, and select ‘Edit Role’, update the Role Name to ‘Continued’, and click OK button.

Now I am done creating the template. Save it as ‘Swimmer Plot’ in ‘My Folder’.

Prepare the data for the swimmer plot

I generated the data set from Sanjay’s swimmer plot code and updated the missing values in the ‘Stage’ column, which avoids missing values showing up in VA. I put the generated CSV file here. Next, I need to prepare the data so it can be used directly to draw the swimmer plot in VA.

1 - Change the Classification of the ‘item’ column, from Measure to Category as shown below.

2 - Create a custom sort for the ‘item’ column. Right-click the ‘item’ column in the ‘Data’ pane and select ‘Custom sort…’ from the menu. In the ‘Add Custom Sort’ pop-up page, click the ‘Add all’ icon to have all the items sorted as below.

3 - Create a calculated item named ‘Continued’ as shown below, its expression is IF ( 'highcap'n NotMissing ) RETURN ( 'high'n + 0.2 ) ELSE ..

That’s all for the data preparation.

Create the swimmer plot in VA

We will first import the ‘Swimmer Plot’ template. In SAS Visual Analytics, go to the ‘Objects’ pane and click the toolbox icon. Select ‘Import custom graph…’ from the pop-up menu and choose ‘Swimmer Plot’ in the open dialog. Click the OK button to import the graph template we just created. Now ‘Swimmer Plot’ will be listed in the ‘Graph’ section of the ‘Objects’ pane.

Next, drag the ‘Swimmer Plot’ object to the canvas and assign the corresponding data columns to the roles; SAS Visual Analytics will render the swimmer plot. To show more legends for the marks in the plot, I use an Image object. I put the ‘Swimmer Plot’ and the legend image in a Precision container. Now we see the chart as shown below.


With SAS Graph Builder, we created the swimmer plot template using two schedule charts and four scatter plots. After importing the template into SAS Visual Analytics, we can create the swimmer plot easily by assigning the corresponding data roles.

How to draw a swimmer plot in SAS Visual Analytics was published on SAS Users.

November 10, 2021

Time is a free resource to people yet is the most precious one. We all have 24 hours every day in our lives. We do not need to pay for getting these hours, and we do not have ways to pay for getting more than 24 hours a day. Have you ever noticed how you spend your time? Or how other people spend their time?

Certainly, there will be commonalities – for example, all people need time to sleep, to eat and many people need time to work and study. Also, for sure there are differences in how people divide their time for activities in each day. There might be some pattern of time use in different countries and different cultures. I am interested in exploring this, so I found some data from the web to explore.

What is a Time Use Survey?

Over the last 30 years, an increasing number of countries around the world have been conducting large-scale time use surveys. The Time Use Survey is designed to measure the amount of time people spend on various activities in their daily life, across a total duration of 24 hours (or 1,440 minutes). These activities, such as work, relaxing, and exercising, are classified into a set of descriptive categories, and respondents are interviewed about the time they spend on them. The data is then recorded, calculated, and edited.

I got the time use data from OECD (Organization for Economic Co-operation and Development) site, and the time use survey was conducted in more than 30 countries from 2009 to 2016. I also got the American Time Use Survey data for 2020 for my exploration. I am aware that the data quality might not be good enough for serious research, but that’s not a problem for me. I just want to explore it for fun, while practicing SAS Visual Analytics usage.

How do people around the world spend their time?

Download the Excel file from the OECD site and import it into SAS Visual Analytics. I will explore how people in different countries spend their time: how many minutes, on average, they spend on each of the five categories (OECD groups the different activities into five categories).

We can easily draw a bar chart in SAS VA like the one below. Note: the downloaded OECD data includes time use data for America, but I eliminated it from this chart because its total is 1,469 minutes (more than 24 hours a day). That led me to explore the American time use data separately.

See how the green bars are the longest among the five colored bars? They represent Personal Care. It seems people across these countries spend the most time on Personal Care. Unbelievable? Check the activities in the Personal Care category: sleeping, eating, dressing, and other personal care activities. All right, people sleep about 8 hours (480 mins) every day on average; that’s about 30% of a day. It makes sense that the Personal Care category occupies the most time (about 661 mins on average) in our daily life.

Now from another perspective, let’s see the top and bottom countries for time spent on Personal Care, as well as on paid work/study. From the charts below, I guess you won’t be surprised to see France as the top country for Personal Care time, and Japan as the top country for time on paid work/study, while Italy is the country where people spend the least time on paid work/study.

Note that in the above charts, I use the same scale for the X axis intentionally. This makes sure people get a direct feel for the differences between the two categories: the ‘Paid work or study’ time on the right is less than half of the ‘Personal Care’ time on the left.

Furthermore, we can look at the distribution of these five categories across all these countries. Calculate the percentage for each major category using calculated items in VA and show them in a box plot. We see people on average spend about 46% of their time on Personal Care, about 20% on Leisure, and 19% on paid work/study. The highest percentage for ‘Personal Care’ is about 52%, more than 12 hours every day; the lowest is about 42%, about 10 hours every day. Also notice that Personal Care, Leisure, and paid work/study are the top 3 categories, together taking more than 85% of the time each day.

How do Americans spend their time?

As I mentioned, the American data from OECD is not ideal for me, so I downloaded the American Time Use Survey (ATUS) data and used the 2020 data file for further exploration. The ATUS data is organized into different categories using a different methodology than the OECD data, so I had to do data preparation in SAS Studio and then explore in SAS Visual Analytics.

Prepare the data

The raw 2020 data file has 399 columns and 8,782 rows. It contains the total number of minutes each respondent spent on each 6-digit activity (per the ATUS coding rules). The column names consist of the letter "t" followed by the 6-digit code. The first and second digits of the code correspond to a tier 1 code, the next two digits correspond to a tier 2 code, and so on. Each row corresponds to a unique respondent.

So my data preparation includes:

  • Classify the 6-digit activities to their corresponding tier1 codes, which comes to about 18 categories.
  • Calculate the means and 99% confidence interval for each of the 18 categories.
  • Transpose the dataset and merge the datasets. If you are interested in how I did this, you can get the code on GitHub.
  • The ATUS data set contains a column on Age, so I can make a custom category of age group in VA and divide the ages into three categories: less than 18, greater than 65, and between 18 and 65. This enables me to compare the ATUS data with the OECD data (whose ages are between 18 and 65).
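For the confidence-interval step above, I used the usual normal approximation, mean ± z·s/√n with z ≈ 2.576 for a 99% interval. A minimal sketch of that calculation (the minutes below are made-up sample values, not ATUS data):

```python
import math
import statistics

def mean_ci99(values):
    """Return (mean, lower, upper) using the normal approximation."""
    z = 2.576                                   # two-sided 99% critical value
    m = statistics.mean(values)
    se = statistics.stdev(values) / math.sqrt(len(values))
    return m, m - z * se, m + z * se

# Made-up daily minutes for one activity category
minutes = [520, 560, 540, 580, 600, 550, 570, 530]
m, lo, hi = mean_ci99(minutes)
print(round(m, 1), round(lo, 1), round(hi, 1))
```

In my actual preparation this calculation is done in SAS for each of the 18 tier 1 categories; the sketch just shows the formula.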

Aggregate the data

ATUS contains detailed data from thousands of respondents with hundreds of columns. I need to aggregate the data for my exploration. Here are some tips when doing the aggregation for each of these hundreds of columns:

  1. The default aggregation for measure items in VA is Sum. We can easily change the aggregation in the Data pane by clicking the ‘Edit properties’ icon and choosing another aggregation (I will use ‘Average’) in the ‘Aggregation’ dropdown list. But with hundreds of measure items in the ATUS data set, how can I quickly set the average aggregation for all of them instead of one by one? The tip: click the first measure item, scroll to the last item, and Shift-click it to select all the measure items in between. Then right-click and, from the pop-up menu, choose Aggregation > Average. This sets the aggregation to average for all the chosen items.
  2. I need a bunch of calculated items, each comprising many measure items. In SAS Visual Analytics, we can manually add each item in visual mode, but it’s too tedious to add so many measure items. The tip here is to write some SAS macro code to generate the calculation expressions as text, then copy/paste the expressions in text mode.
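The expression-generation tip boils down to string assembly: join the member column names into one calculated-item expression. A language-agnostic sketch of the idea (the column list here is a hypothetical tier 1 grouping, following the "t" + 6-digit naming convention described earlier):

```python
# Hypothetical set of ATUS columns belonging to one major activity category
personal_care_cols = ["t010101", "t010102", "t010201", "t010299"]

# Assemble a VA calculated-item expression that sums the member columns,
# using VA's 'name'n quoting style for data item references
expression = " + ".join(f"'{col}'n" for col in personal_care_cols)
print(expression)
```

The printed text can then be pasted into the calculated item's text mode editor; the SAS macro version of this trick builds the same string with %DO loops.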

Explore the data

According to the ATUS coding rules, ATUS uses different categories than OECD. To be able to compare the time used in major activity categories, I made major activity categories similar to those from OECD, based on my personal understanding of the ATUS activities. Then, with the bunch of calculated items, I get the time for these major activity categories. Due to the methodology difference, be aware that this may make the results partially inaccurate.

Now my ATUS exploration starts. The charts below show how people in America divide their daily time. The dataset has information on gender, so the bottom chart shows the average percentage for males and females respectively.

When I put the percentage data (calculated for the major activity categories) in a box plot, it has a lot of outliers for each category. Considering the different methodology and my personal classification into major activity categories (here is the OECD code), I see some differences from the OECD box plot. Note that the top two major activity categories are Personal Care and Leisure, the same pattern as in the OECD data.

Identify the outliers

Notice those outliers in the above box plot? I’d better explore more. In the latest version, SAS Visual Analytics automatically detects outliers in data items. The ‘Insights’ feature lists the data items in report objects that might be affected by outliers.

For example, in the screenshot below, I made a histogram of ‘Personal Care %’, which shows its distribution looks roughly normal. If I click the ‘Insights’ icon at the top-right corner, VA shows all the data items that might be affected by outliers. Clicking the icon next to the ‘Personal Care %’ item at the bottom pops up a message saying that there are 243 outliers in this data item.

Create a custom graph

I saw a lot of outliers in the ATUS data columns when exploring, so I decided to use the mean value with confidence intervals. I created a custom graph with a scatter plot and a schedule chart. In SAS VA, I assign the black dot in the custom graph to show the mean value and make the beginning and end of each blue bar show the 99% confidence interval.

Below are the top 10 ATUS activity categories (here are the ATUS tier 1 code categories) that American people spend time on. We see the largest average time is Personal Care, about 586 mins (nearly 10 hours), with a 99% confidence interval ranging from 583.5 to 588.4 mins.

That’s my initial exploration of Time Use Survey data, but much more can be done. For example, because ATUS data is collected on an ongoing, monthly basis, we can perform time-series analysis to identify changes in how people spend their time.

Would you like to give it a try? Visit the SAS Visual Analytics Gallery on the SAS Support Communities to see more ways you can use SAS Visual Analytics to explore data. Then sign up for a two-week free trial of SAS Visual Analytics.

EXPLORE NOW | SAS Visual Analytics Gallery
START FREE TRIAL | SAS Visual Analytics

How do people divide their time among daily activities? was published on SAS Users.

April 29, 2021

Ever heard of the Mandelbrot set? I learned about it recently from an article introducing a book translated from ‘Le Grand Roman des Maths’ by Mickaël Launay. I was impressed and thought I would see if I could draw one in SAS Visual Analytics.

Here are the seven steps I took:

1. Generate the data set

The first problem is where to get the data set. SAS documentation provides this sample using DS2 and HPDS2 to generate the data set. I changed the code to make it run with a SAS data step. When I run my code in SAS Studio, PROC GCONTOUR renders the graph shown below.
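For readers who want the gist without the SAS sample, the data generation is the standard escape-time iteration: for each grid point c = p + qi, iterate z → z² + c and record how many iterations it takes for |z| to exceed 2; that count becomes the mesh value. A compact sketch of the same idea (the grid resolution and iteration cap below are arbitrary, far smaller than the 360K-row data set):

```python
def mandelbrot_mesh(p, q, max_iter=50):
    """Escape-time count for c = p + q*i; max_iter means 'did not escape'."""
    c = complex(p, q)
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

# Build (p, q, mesh) rows over a small grid, like the data set used in VA
rows = [(p / 10, q / 10, mandelbrot_mesh(p / 10, q / 10))
        for p in range(-20, 6) for q in range(-15, 16)]

print(mandelbrot_mesh(0, 0))   # interior point: never escapes
print(mandelbrot_mesh(2, 2))   # far outside: escapes immediately
```

The SAS data step version does exactly this over a much finer p/q grid, writing one row per point.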

2. Assign data to numeric series plot

I now have the generated Mandelbrot data set, which has about 360K rows and three numeric columns: p, q, and mesh. It’s ready to upload the Mandelbrot dataset to SAS Visual Analytics, and I’m ready to start my drawing 😊.

I am going to use a numeric series plot to draw the graph. Since the system data limit for a numeric series plot is 3,000 rows, I need to override it by checking ‘Options’ -> ‘Object’ -> ‘Override system data limit’. I reset it to 500,000 based on my VA server capacity. (This value should be adjusted based on your VA environment’s capability.)

Now assign p to ‘X axis’, q to ‘Y axis’, and mesh to ‘Group’ (be sure to change the classification of mesh to ‘Category’), and happily wait for the rendering of the graph.

Unfortunately, I got the message: ‘No data appears because too many values were returned from the query. Filter your data to reduce the number of values.’

3. Filter the data

So I compromise by breaking the data into several parts with filters, and then use a Precision layout to put them together.

I use the q value to create the filter. After a couple of tries, something like “( 'q'n BetweenInclusive(-1.5, -0.9) ) OR ( 'q'n Missing )” works for me. Thus, I decide to break the whole data set into five parts across the span of q (from -1.5 to 1.5) and draw each part in one numeric series plot.

4. Use Precision layout

To put together all the parts, with the filtered data in each numeric series plot, I need a Precision container to hold all five plots. To fit them together nicely, I need to adjust the options of the numeric series plots.

An easy way is to set for one and duplicate for others. Here are option settings I am using:

  • In ‘Object’ -> ‘Title’, set to ‘No title’;
  • Check the ‘Style’ -> ‘Padding’, and set the value to 0;
  • Uncheck the ‘Graph Frame’ -> ‘Grid lines’;
  • In ‘Series’ -> ‘Line thickness’: set to 1;
  • In ‘Series’ -> ‘Markers’ -> ‘Marker size’: set to 3;
  • Uncheck ‘X Axis Options -> ‘Axis label’, and uncheck ‘Tick values’;
  • Uncheck ‘Y Axis Options -> ‘Axis label’, and uncheck ‘Tick values’;
  • In ‘Legend’ -> ‘Visibility’, choose ‘Off’.

5. Duplicate and set layout for the five numeric series plots

Now, I have one numeric series plot that has options set up as described, and one filter on the q.

Next, duplicate the numeric series plot four times and change the filter for each. What I want is for the five numeric series plots to add up to the whole span of q (from 1.5 down to -1.5), from top to bottom.

For each of the numeric series plots, set the values in its ‘Options’ -> ‘Layout’ section as follows. The Filter column indicates the filter range of q.

Filter for q Left Top Width Height
0.9 ~ 1.5 0% 0% 100% 20%
0.3 ~ 0.9 0% 18% 100% 20%
-0.3 ~ 0.3 0% 36% 100% 20%
-0.9 ~ -0.3 0% 54% 100% 20%
-1.5 ~ -0.9 0% 72% 100% 20%
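The filter bands and layout values in the table follow a simple arithmetic pattern (bands of width 0.6 from the top of the q range down, stacked at 18% vertical increments), so they can be generated rather than typed. A small sketch reproducing the table:

```python
# Five bands of width 0.6 covering q from 1.5 down to -1.5,
# stacked top-to-bottom at 18% vertical increments
rows = []
for i in range(5):
    hi = 1.5 - 0.6 * i
    lo = hi - 0.6
    rows.append({"filter": (round(lo, 1), round(hi, 1)),
                 "left": "0%", "top": f"{18 * i}%",
                 "width": "100%", "height": "20%"})

for r in rows:
    print(r["filter"], r["top"])
```

The 20% heights with 18% top offsets deliberately overlap a little so no horizontal gaps appear between the stacked plots.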


6. Set Display Rules

With above steps, now the graph is rendered using the default colors in VA.

But I like the colors used by the original code, so I change them using display rules. In the Display Rules tab, create a display rule on mesh, and add each mesh value with the wanted color.

For example, if the mesh value is 3, look up the GOPTIONS segment in the code and note that it uses the ‘CX003366’ color value. In SAS Visual Analytics, go to the Custom color tab when creating the display rule, and for mesh value 3, enter ‘003366’ in the ‘Hex value’ box.

Of course, I need some patience to get all the mesh values colored with display rules.

7. Render the Mandelbrot set

And now, I have drawn the Mandelbrot set in SAS Visual Analytics. I also put a Text object (‘Mandelbrot set’) below the graph to show what is being graphed.

How do you like it? Just give it a try and have fun!

To learn more about Mandelbrot sets in SAS, read these posts by my Cary-based colleagues:


How to draw a Mandelbrot set in SAS Visual Analytics was published on SAS Users.

March 25, 2021

Readers of my earlier post Discover Visual Analytics Report Paths with REST APIs asked for ways to export SAS Visual Analytics (VA) report content programmatically. I know this is a topic of interest to many VA report designers, so I think it’s worth writing about, and I hope this post helps with such requirements.

We all know SAS Visual Analytics provides the ability to export reports to PDF in the product GUI. In addition, the REST API for visualization also provides APIs to save the entire report or report objects as SVG images. In this article, I will use the SAS VA SDK to export VA reports to a PDF file. Note: this task requires some basic knowledge of JavaScript programming; the good thing is, it’s not that complicated.

The SAS VA SDK provides a set of components and APIs that enable you to render anything from the entire report down to individual report parts. I am going to show how to export VA report content to a PDF document.

The VA SDK requires several prerequisites to be set up in SAS Viya. These steps are covered in the documentation, and I’ll not detail them here. For reference, they may include enabling CORS, CSRF, HTTPS, and cross-site cookies. Also, the VA SDK provides options to connect to SAS Viya using 'credentials' or 'guest' authentication; if you want the report to be accessible by the 'guest' user, guest access must be enabled in SAS Viya. To load the SDK library, include a script tag such as:

<script async src=""></script>

*Note: I am using va-report-components@latest above to invoke the latest available version of the SDK library. You may also indicate a specific version, such as @0.14.0 for version 0.14.0 of the SDK library.

Get the VA report URI

If you are not familiar with how to get the reportUri, refer to the 'Get the ReportURI' section in the Using REST API to transform a Visual Analytics Report post. In this example I received the following response to my API call: reportUri=/reports/reports/cbf97b0a-457d-4b4f-8913-547e0cdf390c.

Display the VA report in the web page

This can be done by embedding an HTML custom tag in the body section of your web page. The VA SDK supports three types of HTML custom tags: for the entire report, a report page, or individual report objects. Each type is introduced below.

  1. <sas-report>
  2. In the sample code below, the URL represents the SAS Viya server, the authenticationType is 'guest' or 'credentials', and the reportUri identifies the report to render.

  3. <sas-report-page>
  4. In the sample code below, the URL represents the SAS Viya server, the authenticationType is ‘guest’ or ‘credentials’, the reportUri identifies the report, and the pageName indicates which page within the report to render. You can use an actual page name, or use pageIndex="0", which refers to the first page in the report. You can get the tag in SAS Visual Analytics by clicking the 'Copy link…' menu item from the context menu of the page, choosing the 'Embeddable web component' option, and clicking the 'Copy Link' button.

  5. <sas-report-object>
  6. In the sample code below, the URL represents the SAS Viya server, the authenticationType is 'guest' or 'credentials', the reportUri identifies the report, and the objectName gives the name of the object in the VA report to render. You can get the tag in SAS Visual Analytics by clicking the 'Copy link…' menu item from the context menu of an object, choosing the 'Embeddable web component' option, and clicking the 'Copy Link' button.


Make a function to export PDF

Remember when I said you’d need a little JavaScript knowledge? Well, the time is now. Follow the steps below to create a JavaScript function that exports the report, page, or object to a PDF file.

  1. Load the global vaReportComponents from the SDK library. This is done by listening for the 'vaReportComponents.loaded' event with window.addEventListener.
  2. Next, get the report handle by calling the getReportHandle method on an object given by one type of three custom HTML tags. Something like myReport.getReportHandle(), myReportPage.getReportHandle(), or myReportObject.getReportHandle().
  3. Invoke the reportHandle.exportPDF(options) function to export the PDF. The options give the customized properties of the exported report; if no options are specified, default values are used. For example, options can include 'includeCoverPage: false', which means the exported PDF will not have a cover page for the report. There are multiple options for the exportPDF function; please refer to the VA SDK documentation for more info and usage.

Put it all together

Below are the snippets to generate the PDF document for a report page, using the options of no cover page and no appendix.

  1. I put a button in the HTML page, so I can click the button to trigger the export PDF function.
  2. The page displays the report page I am going to export. I’ve added id="sasReportPage" in the sas-report-page html tag, so I can get the DOM element by its ID quickly using the document.getElementById("sasReportPage") method.
  3. <html>
     <head>
         <meta http-equiv="content-Type" content="text/html">
         <script async src=""></script>
     </head>
     <body>
         <div id="buttons"> Export the PDF document of the VA report page by clicking the
         <button type="button" class="btn_load" id="PrintBtn" onclick="PrintPDF()"> EXPORT PDF </button> button. </div>
         <div>
             <sas-report-page id="sasReportPage"
                 reportUri="/reports/reports/cbf97b0a-457d-4b4f-8913-547e0cdf390c" pageIndex="0">
             </sas-report-page>
         </div>
         <script>
         function PrintPDF() {
             const myReport = document.getElementById("sasReportPage");
             // get the report page handle (the SDK library must already be loaded,
             // that is, after the 'vaReportComponents.loaded' event has fired)
             myReport.getReportHandle().then((reportHandle) => {
                 // set options: do not include the cover page and appendix of the report
                 const options = {
                     includeCoverPage: false,
                     includeAppendix: false,
                     includedReportObjects: ["vi6"]
                 };
                 // call the exportPDF function to export the PDF document
                 reportHandle.exportPDF(options).then((pdfUrl) => {
                     // open the exported PDF in a new window
                     window.open(pdfUrl, '_blank');
                 });
             });
         }
         </script>
     </body>
     </html>
  4. Save the code above as an html page, so I can access it from a web server. For example, save to my localhost/myproj/mysdk.html. When I load the page successfully, it shows the VA report page embedded in my html page as below:
  5. VA report page embedded in my html page

  6. Now, clicking the 'EXPORT PDF' button, opens a new page with exported PDF document like below:
  7. Exported PDF document


In this post, we’ve learned how to use the SAS VA SDK to load reports, display them in a web page and export them to a PDF file. In the sample snippets, I used the sas-report-page html tag to export one page of a VA report. Change the html tag to sas-report accordingly, and you can easily export the whole VA report, or change it to sas-report-object to export an object in the VA report.

Programmatically export a Visual Analytics report to PDF was published on SAS Users.

September 16, 2020

There are three types of visualization APIs defined in the SAS Viya REST API reference documentation: Reports, Report Images and Report Transforms. You may have seen the posts on how to use Reports and Report Images. In this post, I'm going to show you how to use the Report Transforms API. The scenario I am using changes the data source of a SAS Visual Analytics report and saves the transformed report.

Overview of the Report Transforms API

The Report Transforms API provides simple alterations to SAS Visual Analytics reports, and it uses a dedicated media type for the transform representation (passed in the header of the REST API call). When part of a request, the transform performs editing or modifications to a report. If part of a response, the transform describes the operation performed on the report. Some typical transformations include:

  • Replace a data source of a report.
  • Change the theme of a report.
  • Translate labels in a report.
  • Generate an automatic visualization report of a specified data source and columns.

To use the Transforms API, we need to properly set the request body and some attributes for the transform. After the transform, the response contains the transformed report or a reference to the report.

Prepare the source report

This step is very straightforward. In SAS Visual Analytics, create a report and save it to a folder (instructions on creating a report are found in this video). For this example, I'll use the 'HPS.CARS' table as the data source and create a bar chart. I save the report with the name 'Report 1' in 'My Folder'. I'll use this report as the original report in the transform.

Generate the request body

I will use PROC HTTP to call the Transforms API using the 'POST' method and appending the URL with '/reportTransforms/dataMappedReports'. The call needs to set the request body.

  1. Get the reportURI: In an earlier post I outlined how to get the reportURI via the REST API, so I won't go into details. If you'd like an easy way, try this: in SAS Visual Analytics, choose the 'Copy Link…' item from the menu. In the pop-up dialog, expand 'Options' and choose 'Embeddable Web Component'; you will see a string of the form reportUri='/reports/reports/…', and that's it. In the request body, we set this string as the 'inputReportUri' to specify the original report, 'Report 1'.
  2. Report URI from SAS Visual Analytics

  3. Decide on changes to the data source: Here I’d like to change the data source from ‘HPS.CARS’ to ‘CASUSER.CARS_NEW’. The new table uses three columns from ‘HPS.CARS’ as mapped below.
  4. Select columns to include in new table

  5. Specify the data sources in the request body: The request requires two data sources, 'original' and 'replacement', representing the data sources of the original report and the transformed report respectively. Note that the 'namePattern' value specifies how the data source is identified. If it is set to 'uniqueName', the data source is identified by its unique data item name in the XML file of the report. If it is set to 'serverLibraryTable', the data source is identified by the CAS server, CAS library and table names together. The snippet below shows the data source section of the request body. I like to use 'serverLibraryTable' to specify the data source for both the original and transformed report, which is clear and easy.
  6. /* data source identification for original report */
     {
         "namePattern": "serverLibraryTable",
         "purpose": "original",
         "server": "cas-shared-default",
         "library": "HPS",
         "table": "CARS"
     },
     /* data source identification for transformed report */
     {
         "namePattern": "serverLibraryTable",
         "purpose": "replacement",
         "server": "cas-shared-default",
         "library": "CASUSER",
         "table": "CARS_NEW",
         "replacementLabel": "NEW CARS",
         "dataItemReplacements": [
             { "originalColumn": "dte",  "replacementColumn": "date" },
             { "originalColumn": "wght", "replacementColumn": "weight" },
             { "originalColumn": "dest", "replacementColumn": "region" }
         ]
     }
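For readers who prefer to see the request body as one piece, here is the same structure assembled in Python. Note this is an illustration: the top-level "dataSources" array name and the placeholder inputReportUri are my assumptions about the overall schema; check the Report Transforms API reference for the exact layout.

```python
import json

# Hypothetical assembly of the transform request body; the key names of
# the data source entries follow the snippet above, while the top-level
# "dataSources" wrapper and the report id are placeholders.
body = {
    "inputReportUri": "/reports/reports/<original-report-id>",
    "dataSources": [
        {   # data source identification for the original report
            "namePattern": "serverLibraryTable",
            "purpose": "original",
            "server": "cas-shared-default",
            "library": "HPS",
            "table": "CARS",
        },
        {   # data source identification for the transformed report
            "namePattern": "serverLibraryTable",
            "purpose": "replacement",
            "server": "cas-shared-default",
            "library": "CASUSER",
            "table": "CARS_NEW",
            "replacementLabel": "NEW CARS",
            "dataItemReplacements": [
                {"originalColumn": "dte", "replacementColumn": "date"},
                {"originalColumn": "wght", "replacementColumn": "weight"},
                {"originalColumn": "dest", "replacementColumn": "region"},
            ],
        },
    ],
}
print(json.dumps(body, indent=2))
```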

    Set more attributes for transform

    Besides the request body, we need to set some other attributes for the transform API when changing the data source. These include 'useSavedReport', 'saveResult', 'failOnDataSourceError' and 'validate'.

    • useSavedReport specifies whether to find the input (original) report as a permanent resource. Since I am using the saved report in the repository, I will set it to true.
    • saveResult specifies to save the transformed report permanently in the repository or not. I am going to save the transformed report in the repository, so I set it to true.
    • failOnDataSourceError specifies whether the transform continues if there is a data source failure. The default value is false, and I leave it as such.
    • The validate value decides whether the transform performs XML schema validation. The default value is false, and I leave it as such.

    Decide on a target report and folder

    I'll save the transformed report with the name 'Transformed Report 1' in the same folder as the original 'Report 1'. I set the 'resultReportName' to 'Transformed Report 1', and set the 'resultReport' with 'name' and 'description' attributes. I also need to get the folderURI of the 'My Folder' directory. You may refer to my previous post to see how to get the folderURI using REST APIs.

    Below is the section of the settings for the target report and folder:

    "resultReportName": "Transformed Report 1",
    "resultParentFolderUri": "/folders/folders/cf981702-fb8f-4c6f-bef3-742dd898a69c",
    "resultReport": {
        "name": "Transformed Report 1",
        "description": "TEST report transform"
    }
    Perform the transform

    Now, we have set all the necessary parameters for the transform and are ready to run the transform. I put my entire set of code on GitHub. Running the code creates the 'Transformed Report 1' report in 'My Folder', with the data source changing to CASUSER.CARS_NEW', containing the three mapped columns.

    Check the result

    If the API fails to create the transformed report, the PROC SQL statements display an error code and error message. For example, if the replacement data source is not valid, errors similar to the following are returned.

    Invalid data source error message

    If the API successfully creates the transformed report, "201 Created" is displayed in the log. You may find more info about the transformed report in the response body of tranFile from PROC HTTP. You can also log into the SAS Visual Analytics user interface to check that the transformed report opens successfully and that the data sources changed as expected. Below is a screenshot of the original report and the transformed report. You may have already noticed from the data labels that they use different data sources.

    Original and transformed reports in SAS Visual Analytics


    A wealth of other transformations is available through the Report Transforms API. Check out the SAS Developer site for more information.

Using REST API to transform a Visual Analytics Report was published on SAS Users.

August 21, 2020

SAS Viya is an open analytics platform accessible from user interfaces or various coding languages. The REST API is one of the most widely used interfaces. Multiple resources exist on how to access SAS Visual Analytics reports using the SAS Viya REST API. For example, Programmatically listing data sources in SAS Visual Analytics by my colleague Michael Drutar shows how to list the data sources of VA reports. Also, in Using SAS Viya REST APIs to access images from SAS Visual Analytics, Joe Furbee demonstrates how to retrieve report images. In this post, I am going to show you how to get the path for SAS Visual Analytics reports using REST APIs.

Full API reference documentation for SAS REST APIs is available on the SAS developers portal. You can exercise REST APIs in several ways, such as curl, browsers, browser plugins, or any other REST client. Here I am going to access the SAS Viya Visualization and Core Services REST APIs with SAS code. The Visualization service APIs provide access to reports, report images, and report transforms. The Core Services APIs provide operations for resources like folders, files, authorization, and so on.

Composition of a report object

The chart below describes the object composition of VA reports, from an API perspective. We see the report object itself has metadata storing the report properties like id, name, creator, modified date, and links, etc. Each VA report object is identified uniquely by its ID in SAS Viya. The report content object, presented in either XML or JSON format, is stored separately from the report object. The report content object enumerates the data and image resources, generating visual elements such as graphs, tables, and images.

Reports API definition

Get a list of reports

Let's begin with a scenario of getting a list of reports. These reports may be returned from a search or a filter in Viya, or a list you've got at hand. (The SAS Viya support filter link has more information on using the filter.) Here I'm using a filter to get a list of reports named 'Report 2'. I use Proc HTTP to access the Reports API in the Visualization service with a 'GET' request and '/reports/reports?filter=eq(name,'Report 2')' in the URL. Note, the HEADERS of Proc HTTP need to be set properly to generate expected results. Below is a snippet for this.

%let BASE_URI=%sysfunc(getoption(SERVICESBASEURL));
FILENAME rptFile TEMP;   /* fileref to hold the response body */
PROC HTTP METHOD="GET" oauth_bearer=sas_services OUT=rptFile
      /* get a list of reports, say report name is 'Report 2' */
      URL="&BASE_URI/reports/reports?filter=eq(name,'Report 2')";
      HEADERS "Accept" = "application/"
              "Accept-Item" = "application/";
RUN;
LIBNAME rptFile json;

The results of running the code above returns a list in the ITEMS table, in the rptFile json library. It returns about 10 reports with the same name of 'Report 2', each with a unique id.

ITEMS table report for 'Report2' query

Get the report content object of a VA report

Using the Reports API of the Visualization service, we can get the report content object of a VA report. As shown in the snippet below, by making a 'GET' request to the SAS Viya server with '/reports/reports/&lt;report id&gt;/content' in the URL, the report content object is retrieved.

%let BASE_URI=%sysfunc(getoption(SERVICESBASEURL));
FILENAME rptFile TEMP;   /* fileref to hold the response body */
PROC HTTP METHOD="GET" oauth_bearer=sas_services OUT=rptFile
           URL = "&BASE_URI/reports/reports/<report id>/content";
           HEADERS "Accept" = "application/";
RUN;
LIBNAME rptFile json;

In the output, we see the rptFile json library enumerates the data and image resources in the report content object. Below shows what I retrieved from a report content object.

Contents of rptFile json library

Notice the DATASOURCES_CASRESOURCE table, which Michael uses in Programmatically listing data sources in SAS Visual Analytics. You may explore more information in these tables if interested, such as report states, visual elements, etc. In this post, I won't dig further into the report content object.

Get the metadata of a report object

Next, I am going to get the metadata of a report object with its unique report id, using the Reports API in the Visualization service. I use the 'GET' request and '/reports/reports/&lt;report id&gt;' in the URL. By running the code snippet below, I get the metadata of the report object in the rptFile json library.

%let BASE_URI=%sysfunc(getoption(SERVICESBASEURL));
FILENAME rptFile TEMP;   /* fileref to hold the response body */
PROC HTTP METHOD="GET" oauth_bearer=sas_services OUT=rptFile
        URL = "&BASE_URI/reports/reports/cecac7d7-b957-412e-9709-a3fe504f00b1";
        HEADERS "Accept" = "application/";
RUN;
LIBNAME rptFile json;

Below is part of the ALLDATA table from the rptFile library. The table contains metadata of the report object, including its unique id, name, creator, creationTimeStamp, modifiedTimeStamp, links, and so on. But in the table, I can't find the folder location of the report object.

ALLDATA table from the rptFile library

Get the report object folder location

So far, I've retrieved most of the metadata info we are looking for, but not the report object folder location. All VA reports are put under the /SAS Content/ folder or its subfolders in SAS Viya. Yet, no such information exists in the report object or the report content object. How can I get the path of a VA report under the /SAS Content/ folder?

The answer is to use the Folders service on the Core Services API. Folders provide an organizational structure for SAS content as well as external content in Viya. The Folders object itself is a virtual container for other resources or folders, and it persists only the URI of resources managed by other services.

A folder object has two types of members: child and reference. Whereas resources can have references in multiple folders, they can be the child of only a single folder. Resources like VA reports are added as child members of a folder, and the folder persists the URI of this VA report. Thus, we work backward from the child report to its folder by looking up the ancestors of the report object.

By using the Folders API in Core Services with a 'GET' request and '/folders/ancestors?childUri=&lt;report URI&gt;' in the URL, the PROC HTTP code below gets the ancestors of the VA report, from which we derive the full path.

%let BASE_URI=%sysfunc(getoption(SERVICESBASEURL));
FILENAME fldFile TEMP;   /* fileref to hold the response body */
PROC HTTP METHOD="GET" oauth_bearer=sas_services OUT=fldFile
          URL = "&BASE_URI/folders/ancestors?childUri=/reports/reports/cecac7d7-b957-412e-9709-a3fe504f00b1";
          HEADERS "Accept" = "application/";
RUN;
LIBNAME fldFile json;

From the fldFile.ANCESTORS table, we see the metadata of the ancestor folders, including folder id, folder name, creator, type, and its parentFolderURI, etc. The screenshot below is part of the ANCESTORS table. Thus, the path of the specific report concatenates these subfolders to a full path of /SAS Content/NLS/Cindy/.

Folder path detailed in the ANCESTORS table
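The path concatenation can be sketched in a few lines of Python. I'm assuming the ancestors are returned nearest-first (immediate parent first); the folder names come from the example above.

```python
# Sketch of rebuilding the report's folder path from the ancestors
# response. Assumption: items are ordered nearest-first (immediate parent
# first), so we reverse them before joining.
ancestors = [{"name": "Cindy"}, {"name": "NLS"}, {"name": "SAS Content"}]
path = "/" + "/".join(a["name"] for a in reversed(ancestors)) + "/"
print(path)  # /SAS Content/NLS/Cindy/
```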

Get the path for VA reports

Now that I have several reports, I would need to go through the above steps repeatedly for each one. So, I wrote SAS code to handle this:

  1. Filter those reports named 'Report 2', using the reports API in Visualization service. Save the list of reports in the ds_rpts dataset. The results include metadata for report id, name, createdBy, CreatedAt, and LastModified.
  2. For each report in the ds_rpts data set, call the macro named 'save_VA_Report_Path(reportURI)'. The macro accesses the Folders API in Core Services, and saves the path for a given report back in the rptPath column of the ds_rpts data set.
  3. Print the list of reports with path and other metadata.

The code yields the following output:

List of reports with paths and metadata

You may access my code samples from GitHub and give it a try in your environment. I run the code with SAS Studio 5.2 and VA on SAS Viya 3.5. You may prefer to modify the filter condition as needed (such as createdBy, contains, or more from SAS Viya support filter).


The Reports API is one of many SAS Viya REST APIs. In this post, I've provided multiple discovery paths to follow. You can find more information about this and other APIs on the SAS Viya REST APIs page on the developers portal.

Discover Visual Analytics Report Paths with REST APIs was published on SAS Users.

September 13, 2019

Time-series decomposition is an important technique for time series analysis, especially for seasonal adjustment and trend strength measurement. Decomposition deconstructs a time series into several components, with each representing a certain pattern or characteristic. This post shows you how to use SAS® Visual Analytics to visually show the decomposition of a time series so that you can understand more about its underlying patterns.

Characteristics of time series decomposition

Time series decomposition generally splits a time series into three components: 1) a trend-cycle, which can be further decomposed into trend and cycle components; 2) seasonal; and 3) residual, in an additive or multiplicative fashion.

In additive decomposition, the cyclical, seasonal, and residual components are absolute deviations from the trend component, and they do not depend on trend level. In multiplicative decomposition, the cyclical, seasonal and residual components are relative deviations from the trend. Thus, we often see different magnitudes of seasonal, cyclical and residual components when comparing with the trend component, while the trend component keeps the same scale as the original series.
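A toy numeric illustration of the difference, with made-up numbers: the same trend combined with an absolute (additive) versus a relative (multiplicative) seasonal component.

```python
# In the additive series the seasonal swing has a fixed absolute size;
# in the multiplicative series it scales with the trend level.
trend = [100, 110, 120, 130]
seasonal_add = [5, -5, 5, -5]             # absolute deviations
seasonal_mult = [1.05, 0.95, 1.05, 0.95]  # relative deviations

additive = [t + s for t, s in zip(trend, seasonal_add)]
multiplicative = [t * s for t, s in zip(trend, seasonal_mult)]

swing_add = [abs(y - t) for y, t in zip(additive, trend)]
swing_mult = [abs(y - t) for y, t in zip(multiplicative, trend)]
print(swing_add)   # constant absolute swing of 5
print(swing_mult)  # swing grows with the trend level
```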

How to begin a time series decomposition

SAS provides several procedures for time series decomposition; I use PROC TIMESERIES in this post. The first step is to decide whether to use additive or multiplicative decomposition. PROC TIMESERIES provides multiplicative (MODE=MULT), additive (MODE=ADD), pseudo-additive (MODE=PSEUDOADD) and log-additive (MODE=LOGADD) decomposition. You can also use the default MODE option of MULTORADD to let SAS make the decision based on the features of your data. The good thing is, you can always use a log transformation whenever there is a need to change a multiplicative relationship to an additive relationship. The PLOT option in PROC TIMESERIES can produce graphs of the generated trend-cycle component, seasonal component and residual component. In this post, I output the OUTDECOMP dataset from PROC TIMESERIES, then load the data and visualize the decomposed time series with SAS Visual Analytics to understand more about their patterns.

See how it's done

I decompose the time series in the SASHELP.AIR dataset as an example. The series contains monthly international air travel data from January 1949 to December 1960, as pictured below:

We see an obvious upward trend and significant seasonality in the original series, with more and more intensive fluctuation around the trend. This indicates that the multiplicative decomposition of trend and seasonality components is more appropriate. I get the decomposed components using this SAS code. Here I do not explicitly give the mode option first, and let SAS use the default MODE=MULTORADD option. Since the values in this time series are strictly positive, SAS eventually specifies the MODE=MULT to generate the decomposed series in the OUTDECOMP dataset (see details in the document).
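As a side note, the way MULTORADD resolves here can be paraphrased in a few lines of Python. This is just my reading of the documented behavior (multiplicative decomposition needs strictly positive values), not SAS's exact rule:

```python
def resolve_multoradd(values):
    """Illustrative stand-in for MODE=MULTORADD: multiplicative
    decomposition requires strictly positive data, otherwise fall
    back to additive. See the PROC TIMESERIES documentation for
    the exact rule SAS applies."""
    return "MULT" if min(values) > 0 else "ADD"

print(resolve_multoradd([112, 118, 132, 129]))  # strictly positive -> MULT
```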

When you load the dataset into SAS Visual Analytics and make visualizations, it's very straightforward to draw a time-series plot showing each of the decomposed series.

Note that the magnitudes of the Trend-Cycle-Seasonal and Trend-Cycle components are much larger than those of the Seasonal, Irregular and Cycle components. The upward trend and increasing volatility of the Trend-Cycle-Seasonal component reveal an obvious multiplicative composition of Trend-Cycle and Seasonal components. The formula should be: Trend-Cycle-Seasonal Component = Trend-Cycle Component * Seasonal Component.

Can you visually show the multiplicative relationship in the series?

I can easily make the log transformation of the decomposition series using the calculated item in SAS Visual Analytics, and accordingly show the additive relationship of the transformed series. The visualization below shows the additive relationship of the log transformation of the Trend-Cycle-Seasonal component with the log transformations of Trend-Cycle component and Seasonal component, which is the equivalent of the pre-transformed multiplicative relationship.

In the visualization below, the moss-green line series at the bottom of the chart shows the Log Seasonal component, with each vertical black line representing its value. The lines at the top show that adding the value of the orange line series (the Log Trend-Cycle component) to the value of the mint-green vertical lines (the Log Seasonal component) yields the pine-green line series (the Log Trend-Cycle-Seasonal component).

In the list table, note that the value of the calculated item 'Trend-Cycle Component * Seasonal Component' is equal to the 'Trend-Cycle-Seasonal Component' value highlighted in blue, which indicates the multiplicative composition of 'Trend-Cycle Component' and 'Seasonal Component' into the 'Trend-Cycle-Seasonal Component'. Also, the sum of the calculated item 'Log Trend-Cycle Component' and the 'Log Seasonal Component' is equal to the value of the 'Log Trend-Cycle-Seasonal Component', in light green. They verify the multiplicative and additive relationships, respectively.

More ways to expose and view patterns

Besides the above multiplicative decomposition, we can dig for more multiplicative or additive relationships from the original series and the decomposed series. Here are the formulas:

Original Series = Trend-Cycle-Seasonal Component * Irregular Component

Seasonal-Irregular Component = Seasonal Component * Irregular Component

Original Series = Seasonal Adjusted Series * Seasonal Component

Trend-Cycle Component = Trend Component + Cycle Component 1

[ 1 Note: Despite setting the MODE=MULT option, SAS Proc Timeseries uses the Hodrick-Prescott filter, which always decomposes the trend-cycle component into the trend component and cycle component in an additive fashion. ]
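These multiplicative relationships, and the log transform that turns them additive, can be checked numerically; a minimal sketch in Python with made-up component values:

```python
import math

# Toy check of the relationships above on one observation:
# the multiplicative identities and their log-additive equivalents.
trend_cycle = 120.0
seasonal = 1.05
irregular = 0.98

tcs = trend_cycle * seasonal   # Trend-Cycle-Seasonal component
original = tcs * irregular     # Original series value

# After a log transform the multiplicative relation becomes additive:
assert math.isclose(math.log(tcs), math.log(trend_cycle) + math.log(seasonal))
assert math.isclose(math.log(original), math.log(tcs) + math.log(irregular))
print("multiplicative identities hold additively after the log transform")
```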

Since the decomposed dataset from any time series has the fixed structure shown below, we can easily apply the visualizations in SAS Visual Analytics to the decomposed series from different time series. Just apply the new dataset: all the calculated items are inherited accordingly, and the new data is applied to the visualizations automatically. That's the thing I like most about visualizing time series decomposition in SAS Visual Analytics.

A final decomposition comparison

Let’s compare the multiplicative decomposition and the additive decomposition of the same series. Note the Trend-Cycle components (as well as the Trend and Cycle components) from the multiplicative and additive decompositions are the same, while the seasonal component is decomposed differently in each.

In the screenshot below, we see that the two seasonal components have a similar seasonal fluctuation style, but the values of the seasonal components differ greatly between the multiplicative and additive decompositions. The different decomposition methods also lead to different Trend-Cycle-Seasonal, Irregular and Seasonal-Irregular components. In addition, we still see some patterns in the Irregular component from the additive decomposition.

But in the multiplicative decomposition, the Irregular component looks more random. Thus, multiplicative decomposition is a better choice than additive decomposition for the SASHELP.AIR time series.

PROC Timeseries provides classical decomposition of time series, and SAS has other procedures that can perform more complex decomposition of time series. If you want to visualize time series decomposition in a way you like, give SAS Visual Analytics a try!


How to Visualize Time Series Decomposition using SAS Visual Analytics was published on SAS Users.

August 28, 2019

Moving Average (MA) is a common indicator used in stocks, securities and futures trading in financial markets to gauge momentum and confirm trends. An MA is often used to smooth out short-term fluctuations and show long-term trends, but most MA indicators lag considerably in signaling a changing trend. To capture a trend reversal faster, several newer MA indicators have been developed that detect trend changes more quickly; of those, the Hull Moving Average (HMA) is one of the most popular. This post demonstrates its superiority.

A closer look at HMA

Developed by Alan Hull, it's faster and thus a more useful signal than others. Its main advantage over general MA indicators is its relative smoothness as it signals change. Commonly used MA indicators include the Simple Moving Average (SMA), the Weighted Moving Average (WMA) and so on. SMA calculates the arithmetic mean of the prices, giving each value equal weight. WMA averages individual values with predetermined weights.

Since moving averages are computed from prior data, all MA indicators suffer the significant drawback of being lagging indicators. Even with a shorter-period moving average, which has less lag than one with a longer period, a stock price may drop sharply before the MA indicator signals the trend change. The Hull Moving Average (HMA) uses weighted moving averages and the square root of the period instead of the actual period itself, which makes it more responsive to the most recent price activity whilst maintaining smoothness.

According to Alan Hull, the formula for HMA with period n is:

HMA(n) = WMA( 2 × WMA(n/2) − WMA(n), int(√n) )

We see that the major computing components in HMA are three WMAs. Referring to the WMA specification, in the WMA formula the weight of each price value is related to the position of the value and the period length: the more recent the value, the higher its weight, and the shorter the period, the higher the weights.
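The three WMA building blocks can also be sketched outside SAS. Below is a minimal Python version, assuming the textbook WMA weighting of 1..m within each window (the VA formulas later in this post weight by the running row number 'tid' instead, which differs slightly) and the rounding choices used here (5/2 rounded up to 3, int(√5) = 2):

```python
import math

def wma(series, period):
    """Weighted moving average with weights 1..m per window, the most
    recent value weighted highest. Early rows use the shorter window
    available so far, mirroring a running aggregation."""
    out = []
    for t in range(len(series)):
        window = series[max(0, t - period + 1): t + 1]
        weights = range(1, len(window) + 1)
        out.append(sum(w * x for w, x in zip(weights, window)) / sum(weights))
    return out

def hma(series, period):
    """Hull moving average: WMA(2*WMA(n/2) - WMA(n), int(sqrt(n))).
    As in the post, n/2 is rounded up (5/2 -> 3)."""
    half = wma(series, math.ceil(period / 2))
    full = wma(series, period)
    diff = [2 * h - f for h, f in zip(half, full)]
    return wma(diff, int(math.sqrt(period)))

# On a straight linear trend the de-lagging step 2*WMA(n/2) - WMA(n)
# reproduces the series exactly, so HMA tracks it almost without lag.
prices = [float(t) for t in range(1, 21)]
print(hma(prices, 5)[-1])  # close to 20, a lag of only 1/3
```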

HMA in action

In the remainder of this post, I will show how to calculate the HMA of a stock price using calculated items in SAS Visual Analytics, and show that HMA gives faster upward/downward signals than SMA. I use the data from SASHELP.STOCK with 'IBM' as an example. The data needs to be sorted by date, and a column (named 'tid') added to hold the sequence number, before loading into SAS Visual Analytics for calculation. The data preparation code can be found here. After loading the data into SAS Visual Analytics, we can start by creating the calculated items. Here, I set the period length to 5 (i.e., n=5 in the formula) and calculate the HMA for the 'Close' price of IBM stock as an example.

1. Calculate the first WMA like so...

... using the AggregateCells operator in SAS Visual Analytics. I name it 'WMA(5/2 days)'. Note I’ve rounded (5⁄2) up to an integer of 3; that is, the aggregation starts from the row two before the current one (-2) and ends at the current row (0). The corresponding formula of the calculated item ‘WMA(5/2 days)’ in SAS VA is:

AggregateCells(_Sum_, ( 'Close'n * 'tid'n ), default, CellIndex(current, -2), CellIndex(current, 0)) / AggregateCells(_Sum_, 'tid'n, default, CellIndex(current, -2), CellIndex(current, 0))


2. Similarly, calculate the second WMA in SAS Visual Analytics:

Name it as ‘WMA(5 days)’. The corresponding formula is:
AggregateCells(_Sum_, ( 'Close'n * 'tid'n ), default, CellIndex(current, -4), CellIndex(current, 0)) / AggregateCells(_Sum_, 'tid'n, default, CellIndex(current, -4), CellIndex(current, 0))

3. Now we calculate the HMA, which computes the third WMA using the two WMAs we got from the calculations above. In SAS Visual Analytics, if we directly apply a similar approach for the last WMA calculation, it shows a message that the operands require a group. So here, I need a workaround to make the aggregation work.

4. To work around the problem, I create an aggregated item named ‘sumtid’ to indicate the row sequence number in an aggregated way. To do this, first create a calculated item named ‘One’ with the constant value 1; then use the AggregateCells operator to create ‘sumtid’, which yields the current row number: AggregateCells(_Sum_, 'One'n, default, CellIndex(start, 0), CellIndex(current, 0)).

5. Now we can compute the HMA in a similar way as for the previous two WMAs. Name it ‘HMA for close (5 days)’. Since int(√5)=2, the starting position of the aggregation is set to the previous row (-1) and the ending position is set to the current row (0). Note the operands now use the aggregated item ‘sumtid’. The formula for the ‘HMA for close (5 days)’ item is:

AggregateCells(_Sum_, ( ( ( 2 * 'WMA(5/2 days)'n ) - 'WMA(5 days)'n ) * 'sumtid'n ), default, CellIndex(current, -1), CellIndex(current, 0)) / AggregateCells(_Sum_, 'sumtid'n, default, CellIndex(current, -1), CellIndex(current, 0))

So far, we’ve created the Hull Moving Average of the IBM stock Close price and saved it in the calculated item ‘HMA for close (5 days)’. We can easily draw its time series plot in SAS Visual Analytics. Now, I'll create a Simple Moving Average, ‘SMA for the close (5 days)’, with equal weights, and then compare it with the HMA. The formula for ‘SMA for the close (5 days)’ is: AggregateCells(_Average_, 'Close'n, default, CellIndex(current, -4), CellIndex(current, 0))

Now let’s visualize ‘SMA for the close (5 days)’ and ‘HMA for close (5 days)’ respectively. In the charts below, each grey vertical bar shows the monthly price span of IBM stock, and the red lines correspond to the SMA and HMA respectively. With the upper SMA line, we see constant lag behind price changes and poor smoothness. With the bottom HMA line, we see it rapidly keeping up with price activity while maintaining good smoothness.

Below is the comparison of ‘SMA for the close (5 days)’, ‘HMA for close (5 days)’ and the Close price. Besides smoothing out some fluctuations in the Close price, the HMA indeed gives a better signal than the SMA in indicating a turning point when there is an upward/downward trend reversal. Note the obvious lag of SMA compared to HMA. For example, compare the trends around the reference line in the visualization below. The Close price reached a local peak in Jun 1992 and started to decline from Jul 1992. HMA quickly reflected the downward turn with one lag at Aug 1992, while SMA still showed a rising trend in the meantime. SMA only started to decline one more lag later, finally giving the reversal signal.

Now it’s easy to understand why HMA is a better indicator than SMA for signaling reversal points. What has been your experience with HMA?

How to Calculate Hull Moving Average in SAS Visual Analytics was published on SAS Users.

August 17, 2018

Data density estimation is often used in statistical analysis as well as in data mining and machine learning. Visualizing a density estimate reveals the data’s characteristics, such as its distribution, skewness, and modality. The most widely used visualizations for data density are the boxplot, histogram, kernel density estimate, and a few other plots. SAS has several procedures that can create such plots. Here, I'll visualize kernel density estimates superimposed on a histogram using SAS Visual Analytics.

A histogram shows the data distribution through a set of continuous interval bins, and it is a very useful visualization for presenting a data distribution. With a histogram, we can get a rough view of the density of the distribution of values. However, the bin width (or number of bins) has a significant impact on the shape of a histogram and thus gives different impressions to viewers. For example, the two histograms below use the same data, the left one with 6 bins and the right one with 4 bins. Different bin widths show different distributions for the same data. In addition, a histogram is not smooth enough to compare visually with mathematical density models. Thus, many people use kernel density estimates, which vary more smoothly across the distribution.
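The bin-width effect is easy to demonstrate with a few lines of code. This small Python sketch (a hypothetical illustration, not from the original post) bins the same six values two ways and gets two quite different shapes:

```python
def histogram_counts(data, n_bins, lo, hi):
    """Bin data into n_bins equal-width bins over [lo, hi)."""
    width = (hi - lo) / n_bins
    counts = [0] * n_bins
    for x in data:
        # clamp the top edge into the last bin
        i = min(int((x - lo) / width), n_bins - 1)
        counts[i] += 1
    return counts

data = [1, 1, 2, 6, 6, 7]
print(histogram_counts(data, 4, 0, 8))  # four bins: a bimodal shape
print(histogram_counts(data, 2, 0, 8))  # two bins: looks uniform
```

The same data appears bimodal with four bins but flat with two, which is exactly why the histogram alone can mislead and a smoother estimate helps.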

Kernel density estimation (KDE) is a widely used non-parametric approach to estimating the probability density of a random variable. Non-parametric means the estimate adjusts to the observations in the data, so it is more flexible than a parametric estimate. To plot a KDE, we need to choose the kernel function and its bandwidth. The kernel function is used to compute the kernel density estimates, and the bandwidth controls the smoothness of the KDE plot; it is essentially the width of the sliding window used to generate the density. SAS offers several ways to generate kernel density estimates. Here I use PROC UNIVARIATE to create the KDE output as an example (for simplicity, I set c = SJPI to have SAS select the bandwidth using the Sheather-Jones plug-in method), then make the corresponding visualization in SAS Visual Analytics.
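For intuition, a kernel density estimate with a Gaussian kernel takes only a few lines. This hypothetical Python sketch (not part of the SAS workflow; the bandwidth is assumed given, e.g. the Sheather-Jones value that PROC UNIVARIATE selects with c = SJPI) shows the computation: each observation contributes a small Gaussian bump, and the density at x is the average of those bumps.

```python
import math

def gaussian_kde(data, bandwidth):
    """Return a function estimating the density at x with a Gaussian kernel."""
    n = len(data)
    norm = n * bandwidth * math.sqrt(2 * math.pi)  # normalizing constant
    def density(x):
        # sum of Gaussian bumps centered at each observation
        return sum(math.exp(-0.5 * ((x - xi) / bandwidth) ** 2)
                   for xi in data) / norm
    return density

f = gaussian_kde([1.0, 2.0, 2.5, 6.0], bandwidth=0.8)
print(f(2.0))  # estimated density near the cluster of observations
```

A larger bandwidth averages over a wider window and yields a smoother but flatter curve, which is the trade-off the bandwidth selection methods try to balance.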

Visualize the kernel density estimates using SAS code

It is straightforward to compute kernel density estimates with SAS PROC UNIVARIATE. Take the variable MSRP in the SASHELP.CARS dataset as an example. The minimum and maximum values of the MSRP column are 10280 and 192465, respectively. I plot the histogram with 15 bins in this example. Below is the sample code segment I used to construct kernel density estimates of the MSRP column:

title 'Kernel density estimates of MSRP';
proc univariate data=sashelp.cars noprint;
   histogram MSRP / kernel(c=SJPI) endpoints=10280 to 192465 by 12145
                    outkernel=KDE odstitle=title;
run;

Run the above code in SAS Studio, and we get the following graph.

Visualize the kernel density estimates using SAS Visual Analytics

  1. In SAS Visual Analytics, load the SASHELP.CARS and the KDE dataset (from previous Proc UNIVARIATE) to the CAS server.
  2. Drag and drop a ‘Precision Container’ in the canvas, and put a histogram and a numeric series plot in the container.
  3. Assign the corresponding data to the histogram plot: assign CARS.MSRP as the histogram Measure, and ‘Frequency Percent’ as the histogram Frequency. Set the options of the histogram with the following settings:
    Object -> Title: No title;

    Graph Frame: Grid lines: disabled

    Histogram -> Bin range: Measure values; check the ‘Set a fixed bin count’ and set ‘Bin count’ to 15.

    X Axis options:

       Fixed minimum: 10280

       Fixed maximum: 192465

       Axis label: disabled

       Axis Line: enabled

       Tick value: enabled

    Y Axis options:

       Fixed minimum: 0

       Fixed maximum: 0.5

       Axis label: disabled

       Axis Line: disabled

       Tick value: disabled

  4. Assign the corresponding KDE data to the numeric series plot. Define a calculated item Percent as (‘Percent of Observations Per Data Unit’n / 100) with the format PERCENT12.2, and assign it to the ‘Y axis’; assign ‘Data Value’ to the ‘X axis’. Now set the options of the numeric series plot with the following settings:
    Object -> Title: No title;

    Style -> Line/Marker: (change the first color to purple)

    Graph Frame -> Grid lines: disabled

    Series -> Line thickness: 2

    X Axis options:

       Axis label: disabled

       Axis Line: disabled

       Tick value: disabled

    Y Axis options:

       Fixed minimum: 0

       Fixed maximum: 0.5

       Axis label: enabled

       Axis Line: enabled

       Tick value: enabled


       Visibility: Off

  5. Now we can start to overlay the two charts. As can be seen in the screenshot below, SAS Visual Analytics 8.3 provides a smart guide with the precision container, which shows grids to help you align the objects in it. If you hold the Ctrl key while dragging the numeric series plot over the histogram, the smart guide displays finer grids to help you with basic alignment. It is a little tricky, though; to align the overlay precisely, you may fine-tune the values of Left/Top/Width/Height in the Layout section of the VA Options panel. The goal is to make the intersections of the axes coincide.

After that, we can add a text object above the charts we just made, and we are done: the kernel density estimate superimposed on a histogram, shown in the screenshot below, looks much like what we got from SAS PROC UNIVARIATE. (If you'd like to use the PROC KDE UNIVAR statement for data density estimates instead, you can visualize the result in SAS Visual Analytics in a similar way.)

To go further, I made a KDE with a scatter plot, where we can also get an impression of the data density from the little circles, and another KDE plot with a needle plot, where the data density is represented by the barcode-like lines. Both are created in ways similar to the histogram example above.

So far, I’ve shown you how I visualize KDE using SAS Visual Analytics. There are other approaches to visualizing kernel density estimates in SAS Visual Analytics; for example, you may create a custom graph in Graph Builder and import it into SAS Visual Analytics for the visualization. In any case, KDE is a good visualization for understanding more about your data. Why not give it a try?

Visualizing kernel density estimates in SAS Visual Analytics was published on SAS Users.