SAS Programming

December 13, 2017
 

The SAS language is large. Even after 20+ years of using SAS, there are many features that I have never used. Recently it became necessary for me to learn about DICTIONARY tables in PROC SQL (and the associated SASHELP views) because I needed to programmatically obtain the text for the current value of the system title in SAS. I had heard a lot about DICTIONARY tables, but this was my first time using them in a program.

This article discusses DICTIONARY tables and shows how to use the DICTIONARY.Titles table to obtain the current value of titles and footnotes in SAS.

DICTIONARY tables and titles

DICTIONARY tables are documented in the PROC SQL documentation. They are special read-only PROC SQL tables that contain information about the current state of SAS, including the state of libraries, data sets, and system options. The documentation lists all DICTIONARY tables, and I determined that I needed to look at the DICTIONARY.Titles table in PROC SQL or (if I needed to use another SAS procedure) the SASHELP.VTitle view, which contains the same information.
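
As an aside, if you're not sure which DICTIONARY table you need, you can ask SAS itself: the special DICTIONARY.Dictionaries table describes every DICTIONARY table and its columns. Here's a quick sketch of a query that lists the table names:

proc sql;
select distinct memname        /* MEMNAME holds the DICTIONARY table name */
  from Dictionary.Dictionaries;
quit;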

After finding out which table to use, I wanted to display the contents of the table. The following call to PROC SQL displays the structure of the table (names and types of variables) and the contents. Equivalently, you can use PROC CONTENTS and PROC PRINT to show similar information for the view SASHELP.VTitle. The output shows the table for a new SAS session:

proc sql;
describe table Dictionary.Titles;         /* writes to SAS log */
select * from Dictionary.Titles;          /* display table */
quit;
 
proc contents data=Sashelp.VTitle;  run;  
proc print data=Sashelp.VTitle;     run;

The table contains three variables:

  • The TYPE variable is a one-character variable with the values 'T' (for title) or 'F' (for footnote).
  • The NUMBER variable is a numeric variable. The value '1' indicates the value of the TITLE1 or FOOTNOTE1 global statements. The value '2' indicates the value of the TITLE2 or FOOTNOTE2 statements, and so on.
  • The TEXT variable is a 256-character variable that contains the value of a title or footnote. The output shows that when you first start SAS, the TITLE1 statement is set to "The SAS System."

You can run an example to see how the contents of the view change after you submit TITLE and FOOTNOTE statements:

title "Normal Distribution";    /* alias for TITLE1 statement */
title2 "mu = 0; sigma = 1;";
footnote "N = 100";             /* alias for FOOTNOTE1 statement */
 
proc print data=Sashelp.VTitle;  run;

Putting a title into a macro variable

The structure of the DICTIONARY.Titles table implies that you can use a WHERE clause to subset the table. For example, the clause WHERE Type="T" & Number=1 selects only the row for the TITLE1 statement. Similarly, the clause WHERE Type="F" & Number=2 selects the row for the FOOTNOTE2 statement. You can use the SELECT INTO :MacroVar statement in PROC SQL to put data into a macro variable, as follows:

PROC SQL noprint;
select Text into :TitleText TRIMMED  /* put the trimmed value into a macro */
  from Dictionary.Titles
  where Type="T" & Number=1;
quit;
%put &=TitleText;
TITLETEXT=Normal Distribution

You need to be a little careful when using this technique in production code. If you ask for the TITLE2 information when that title is not set, then the WHERE clause will return an empty table. For example, if you clear the TITLE1 statement and rerun the previous PROC SQL statement, you will see that the SAS log displays "NOTE: No rows were selected" and the value of the TitleText macro is not updated.
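
You can see this behavior with a quick experiment (a sketch that continues from the previous example):

title;                           /* clear all titles, including TITLE1 */
proc sql noprint;
select Text into :TitleText TRIMMED
  from Dictionary.Titles
  where Type="T" & Number=1;     /* no rows: TITLE1 is no longer set */
quit;
%put &=TitleText;                /* still shows the value from the earlier query */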

One way to handle this potential problem is to use the %LET statement to set the macro variable to an empty value before you call PROC SQL. If the macro variable is empty after PROC SQL runs, then the requested title or footnote is not set. The following macro uses this technique to set a macro variable to the value of the Nth title, where you can specify the parameter N:

/* Get the N_th title into the &TitleText macro variable */
%macro GetTitle(Number=1);
%global TitleText;
%let TitleText = ;           /* value is empty if TITLEn does not exist */
PROC SQL noprint;
select Text into :TitleText TRIMMED    /* value is set if TITLEn exists */
  from Dictionary.Titles
  where Type="T" & Number=&Number;
quit;
%mend;
 
/* test the %GetTitle macro */
title "Lognormal Distribution";   /* set TITLE1; clear TITLE2 */
%GetTitle();
%put Title1 = "&TitleText";      /* text of title1 */
%GetTitle(Number=2);
%put Title2 = "&TitleText";      /* text of title2 */
Title1 = "Lognormal Distribution"
Title2 = ""

This basic macro is sufficient for my purposes. Feel free to propose improvements in the comments. Also, let me know how you use DICTIONARY tables in your work.

If you would like to learn more about DICTIONARY tables, the following two references will get you started. Many papers have been written about DICTIONARY tables and views. An internet search of the form
sas proceedings "dictionary table"
will reveal some of the papers from SAS conferences.

The post How to get the current TITLE in SAS appeared first on The DO Loop.

December 13, 2017
 

I'm a big fan of the Import Data task in SAS Enterprise Guide, especially for its support of text-based files (CSV, tab delimited, fixed width, and more). There's no faster method for generating SAS code that reads your data exactly the way you need it. I use the tool so often that I take for granted some of its neatest features, and I forget that many new users (and even veteran users) might not know about them. In this article, I'll review a few of the cool things that this task can do for you.

Read fixed-width text files into SAS

We think of CSV files (and...alas...Excel files) as the main standard for data exchange among systems, but many legacy systems still produce and consume fixed-width text data formats. The SAS DATA step is a perfect tool for reading these files, but defining the columns and their properties can be tedious. The "Fixed columns" option on the Import Data task can make this job simple.

Suppose that you're beginning with a spec like this:

And a raw data file like this:

You can use the Import Data wizard to define the boundaries of your columns by adding boundary lines with just click-and-drag operations. Beginning with the File->Import Data task, select your source text file and advance to the second page of the wizard. When you select "Fixed columns" as the input text format, you'll see a layout ruler that looks like this:

Click at the column boundaries (referring to your original spec!) and drag the rule lines as needed to define those column boundaries. Then click Next, and fill out details for the column names and types:

Which then tells the Import Data task how to generate the proper INPUT statements:

When you click Finish, you end up with a data set that's ready for business:

Modify the properties for multiple columns -- with one step

Here's a click-saving trick. Sometimes you have an input data file that contains many columns that share the same properties: type, length, and SAS format. It can be tedious to click and modify the properties of each column that you want to import. There's a shortcut on the Define Field Attributes page of the wizard that you can use to change the attributes for several columns at the same time. Simply SHIFT+Click to select multiple column definitions on the page, then click Modify.... The "Field Attributes for Multiple Selections" window appears, and you can change the necessary attributes just once and apply to the many items you picked.

This trick works as you import any text file or Excel file.

Create SAS program code that you can reuse anywhere

In a previous article I described how the Import Data task works "behind the scenes." Some of the magic that the task performs is not captured in SAS code, and that can present a challenge when you want to reuse this work in other settings -- for example, in a batch process or in a larger SAS program. However, with a couple of tweaks you can coerce the Import Data task into creating SAS code that you can almost just "lift and shift," as is.

The first option is hidden under the Performance window, labeled as "Bypass the data cleansing process." By default, the Import Data task reformats your input text file to normalize it for a cleaner import step. While it does no harm, most of the time this step isn't needed -- especially if your original data file is well formed. And since this step changes the input file, it isn't repeatable outside of this task. My first tip for the best reusable code: click Performance... on the first page of the wizard, then select the "Bypass.." checkbox. That guarantees that the code will be formulated to read your original raw file. (Note that the Performance button is available only when importing text files, not Excel files.)

The second option you'll want to change is related to this, but you'll find it on the final page with the Advanced Options. Select "Generalize import step to run outside of SAS Enterprise Guide." This ensures that the task won't attempt any behind-the-scenes monkey business with your original file -- everything is captured in the DATA step that the task generates. Well, almost everything...

The one missing piece, a confounding factor when you select a local text file to import on a remote SAS Workspace session, is the transfer of the local file to the remote server. SAS Enterprise Guide copies the file for you -- behind the scenes -- and there is no SAS code to represent this step.

You can take control of even this step, though, if you make use of the Copy Files task (now available for you on the Tasks->Data menu). You can then copy the file from a local source folder, and land it wherever you want on the SAS server. Modify your newly repurposed Import Data code to pull from that server-based destination, giving you more control over the individual steps in the import process.

Learn more about importing text files

If you're new to importing data into SAS, whether using a SAS program or SAS Enterprise Guide, you might learn some of the basics from these video tutorials that were produced by SAS instructors:

December 5, 2017
 

Finding a pattern like a phone number or national ID number embedded in text can be difficult and time consuming. The traditional DATA step has a family of functions (collectively referred to as PRX functions) that allow using Perl regular expressions in your SAS programs to make pattern search easier. [...]

The post Jedi SAS Tricks: Pattern Search in DS2 appeared first on SAS Learning Post.

December 5, 2017
 

The internet is rich with data, and much of that data seems to exist only on web pages, which -- for some crazy reason -- are designed for humans to read. When students/researchers want to apply data science techniques to collect and analyze that data, they often turn to "data scraping." What is "data scraping?" I define it as using a program to fetch the contents of a web page, sift through its contents with data parsing functions, and save its information into data fields with a structure that facilitates analysis.

Python and R users have their favorite packages that they use for scraping data from the web. New SAS users often ask whether there are similar packages available in the SAS language, perhaps not realizing that Base SAS is already well suited to this task -- no special bundles necessary.

The basic steps for data scraping are:

  1. Fetch the contents of the target web page
  2. Process the source content of the page -- usually HTML source code -- and parse/save the data fields you need.
  3. If necessary, repeat for subsequent pages. This applies to those web sites that serve up lots of information in paginated form, and you want to collect all available pages of data.

Let's map these steps to the SAS programming language:

  • Get the contents of the web page: FILENAME URL
  • Process/parse the web page contents: DATA step, with parsing functions such as FIND and regular expressions via PRXMATCH. Use SAS informats to convert text to native data types.
  • Repeat across subsequent pages: SAS macro language (%DO %UNTIL processing) or DATA step processing

In this article, I'll work through an example that a SAS user asked about: scraping data from the Center for Disease Control (CDC) web site.

Step 0: Find the original data source and skip the scrape

I'm writing this article at the end of 2017, and at this point in our digital evolution, web scraping seems like a quaint pastime. Yes, there are still some cases for it, which is why I'm presenting this article. But if you find information that looks like data on a web page, then there probably is a real data source behind it. And with the movement toward open data (especially in government) and data-driven APIs (in social media and commercial sites) -- there is probably a way for you to get to that data source directly.

In my example page for this post, the source page is hosted by CDC.gov, a government agency whose mandate is to share information with other public and private entities. As one expert pointed out, the CDC shares a ton of data at its dedicated data.cdc.gov site. Does this include the specific table the user wanted? Maybe -- I didn't spend a lot of time looking for it. However, even if the answer is No -- it's a simple process to ask for it. Remember that web sites are run by people, and usually these people are keen to share their information. You can save effort and achieve a more reliable result if you can get the official data source.

If you find an official data source to use instead, you might still want to automate its collection and import into SAS. Here are two articles that cover different techniques:

Web scraping is a lossy, fragile process. The information on the web page does not include metadata such as data types, lengths, or constraints. And one tweak to the presentation of the web page can break any automated scraping process. If you have no other alternative and you're willing to accept these limitations, let's proceed to Step 1.

Step 1: Fetch the web page

In this step, we want to achieve the equivalent of the "Save As..." function from your favorite web browser. Just like your web browser, SAS can act as an HTTP client. The two most popular methods are:

  • FILENAME URL, which assigns a SAS fileref to the web page content. You can then use INFILE and INPUT to read that file from a DATA step.
  • PROC HTTP, which requires a few more lines of code to get and store the web page content in a single SAS procedure step.

I prefer PROC HTTP, and here's why. First, as a separate, explicit step, it's easy to run just once and then work with the resulting file for the remainder of your program -- you're guaranteed to fetch the file just once. Although it requires a bit more code, it makes it clearer what happens under the covers. Next reason: PROC HTTP has been refined considerably in SAS 9.4 to run quickly and efficiently. Its many options make it a versatile tool for all types of internet interactions, so it's a good technique to learn. You'll find many more uses for it.

Here's my code for getting the web page for this example:

/* Could use this, might be slower/less robust */
* filename src url "https://wwwn.cdc.gov/nndss/conditions/search/";
 
/* I prefer PROC HTTP for speed and flexibility */
filename src temp;
proc http
 method="GET"
 url="https://wwwn.cdc.gov/nndss/conditions/search/"
 out=src;
run;

Step 2: Parse the web page contents

Before you can write code that parses a web page, it helps to have some idea of how the page is put together. Most web pages are HTML code, and the data that is "locked up" within them are expressed in table tags: <table>, <tr>, <td> and so on. Before writing the first line of code, open the HTML source in your favorite text editor and see what patterns you can find.

For this example from the CDC, the data we're looking for is in a table that has 6 fields. Fortunately for us, the HTML layout is very regular, which means our parsing code won't need to worry about lots of variation and special cases.

 <tr >
	<td style="text-align:left;vertical-align:middle;">
    	<a href="/nndss/conditions/chancroid/">
		Chancroid
		</a>
	</td>
	<td class="tablet-hidden"></td>
	<td class="tablet-hidden"></td>
	<td class="tablet-hidden"><i>Haemophilus ducreyi</i></td>
	<td  >1944 </td>
	<td  > Current</td>
</tr>

The pattern is simple. We can find a unique "marker" for each entry by looking for the "/nndss/conditions" reference. The line following (call it offset-plus-1) has the disease name. And the lines that are offset-plus-7 and offset-plus-8 contain the date ranges that the original poster was asking for. (Does this seem sort of rough and "hacky?" Welcome to the world of web scraping.)

Knowing this, we can write code that does the following:

  • Read the raw HTML content into a data set, one line per record, skipping blank lines.
  • Find each line that contains the "/nndss/conditions/" marker and pull the condition code out of the URL.
  • "Read ahead" (using the POINT= option on the SET statement) to the offset-plus-1 line for the condition name and the offset-plus-8 line for the notification date.
  • Output one record per condition.

Here's the code that I came up with, starting with what the original poster shared:

/* Read the entire file and skip the blank lines */
/* the LEN indicator tells us the length of each line */
data rep;
infile src length=len lrecl=32767;
input line $varying32767. len;
 line = strip(line);
 if len>0;
run;
 
/* Parse the lines and keep just condition names */
/* When a condition code is found, grab the line following (full name of condition) */
/* and the 8th line following (Notification To date)                                */
/* Relies on this page's exact layout and line break scheme */
data parsed (keep=condition_code condition_full note_to);
 length condition_code $ 40 condition_full $ 60 note_to $ 20;
 set rep;
 if find(line,"/nndss/conditions/") then do;
   condition_code=scan(line,4,'/');
   pickup= _n_+1 ;
   pickup2 = _n_+8;
   /* "read ahead" one line */
   set rep (rename=(line=condition_full)) point=pickup;
   condition_full = strip(condition_full);
 
   /* "read ahead" 8 lines */
   set rep (rename=(line=note_to)) point=pickup2;
   /* use SCAN to get just the value we want */
   note_to = strip(scan(note_to,2,'<>'));
 
   /* write this record */
   output;
  end;
run;

The result still needs some cleanup -- there are a few errant data lines in the data set. However, it should be simple to filter out the records that don't make sense with some additional conditions. I left this as a to-do exercise for the original poster. Here's a sample of the result:

Step 3: Loop and repeat, if necessary

For this example with the CDC data, we're done. All of the data that we need appears on a single page. However, many web sites use a pagination scheme to break the data across multiple pages. This helps the page load faster in the browser, but it's less convenient for greedy scraping applications that want all of the data at once. For web sites that paginate, we need to repeat the process to fetch and parse each page that we need.

Web sites often use URL parameters such as "page=" to indicate which page of results to serve up. Once you know the URL parameter that controls this offset, you can create a SAS macro program that iterates through all of the pages that you want to collect. There are many ways to accomplish this, but I'll share just one here -- and I'll use a special page from the SAS Support Communities (communities.sas.com) as a test.

Aside from the macro looping logic, my code uses two tricks that might be new to some readers: the DLCREATEDIR system option, which creates the directory for a library if it doesn't already exist, and the wildcard feature of the INFILE statement, which reads all matching files in a single DATA step.

/* Create a temp folder to hold the HTML files */
options dlcreatedir;
%let htmldir = %sysfunc(getoption(WORK))/html;
libname html "&htmldir.";
libname html clear;
 
/* collect the first few pages of results of this site */
%macro getPages(num=);
 %do i = 1 %to &num.;
   %let url = https://communities.sas.com/t5/custom/page/page-id/activity-hub?page=&i.;
   filename dest "&htmldir./page&i..html";
   proc http 
      method="GET"
	  url= "&url."
	  out=dest;
   run;
 %end;
%mend;
 
/* How many pages to collect */
%getPages(num=3);
 
/* Use the wildcard feature of INFILE to read all */
/* matching HTML files in a single step */
data results;
 infile "&htmldir./*.html" length=len lrecl=32767;
 input line $varying32767. len ;
 line = strip(line);
 if len>0;
run;

The result of this program is one long data set that contains the results from all of the pages -- that is, all of the lines of HTML code from all of the web pages. You can then use a single DATA step to post-process and parse out the data fields that you want to keep. Like magic, right?
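
If it helps to see the shape of that post-processing step, here's a generic sketch. The marker string and tag layout below are hypothetical placeholders -- inspect the saved HTML files to find the real patterns for the page you're scraping:

/* Generic post-processing sketch; the marker and layout are placeholders */
data parsed_all (keep=item_text);
 length item_text $ 200;
 set results;
 if find(line, '<h3 class="title">') then do;   /* hypothetical marker text */
   item_text = strip(scan(line, 2, '<>'));      /* grab the text between the tags */
   output;
 end;
run;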

Where to learn more

Each year there are one or two "web scraping" case studies presented at SAS Global Forum. You can find them easily by searching on the keyword "scrape" from the SAS Support site or from lexjansen.com. Here are some that I found interesting:

How JMP 13 sees the CDC page

If you have access to JMP software from SAS, you could try the File->Internet Open... feature. It's useful for a one-time, ad-hoc parsing step that you can later capture into a JSL script for future use. With Internet Open you can specify a URL and select to open it "as HTML," and then JMP will offer a selection of available tables to import as data. This is definitely worth a try if you have JMP on your desktop, as it can lend some insight about the structure of the page you're trying to scrape.

JMP 13 version of the data, imported

The post How to scrape data from a web page using SAS appeared first on The SAS Dummy.

November 30, 2017
 

There are so many reasons why SAS programmers love SAS -- as a matter of fact, I wrote a blog on it back in 2012. I now realize that I could've written a whole series, not just a single post. And with the recent publishing of my first book, Big Data [...]

Why SAS programmers love SAS, revisited was published on SAS Voices by David Pope

November 29, 2017
 

SAS Viya is an exciting addition to the SAS Platform, allowing you to conduct analysis faster than ever before and providing you the flexibility to utilize open source technologies and generate insights from data in any computing environment. The blog post “Top 12 Advantages of SAS Viya” does a great [...]

The post Learn more about SAS Viya with resources from SAS Education appeared first on SAS Learning Post.

November 22, 2017
 

Social media has brought anniversary dates to the forefront. Every day, my view of Google Photos or Facebook shows me a collection of photos from exactly some number of years ago to remind me of how good things were back then. These apps are performing the simplest of date-based math to trick me. They think, "We have a collection of photos from this month/day from some previous year -- let's collect those together and then prey on Chris' nostalgia instincts."

Sometimes it works and I share them with friends. Weren't my kids so cute back then? It's true, they were -- but I don't pine for those days. I live in the present and I look towards the future -- as all citizens of the world should.

Looking back at dates in SAS

But you know who likes to look at the past? Managers (and probably a few of my colleagues...). Many of my SAS jobs are in support of date-based reports. They answer questions such as, "what happened in the past 30 days?" or "how much activity in the past 6 months?" When I have SAS date values, going back 30 days is simple. The SAS date for "30 days ago" is simply today()-30. (Because, remember, a SAS date is simply an integer representing the count of days since Jan 1, 1960.)
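
For example, here's a minimal sketch of that kind of filter, using the same Comm.Visits data set and Date variable that appear later in this post:

proc print data=comm.visits;
 where date > today() - 30;   /* records from the past 30 days */
run;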

"6 months ago" is a little more nuanced. I could fudge it by subtracting 183 or 184 from the value of today(), but that's not precise enough for the analytical wonks that I work with. Fortunately, SAS provides a function that can compute any date -- given a starting date -- using just about any criteria you can imagine. The function is called INTNX (

six_mo_ago = intnx(
 'month',  /* unit of time interval */
  today(), /* beginning date */
  -6,      /* number of intervals, negative goes to the past */
  'same'   /* alignment of interval date. "Same" is for same day of month */
 );

If I wanted to go 6 years into the past, I would simply change that first argument to 'year'. 6 days? Change it to 'day'. To compute this value in SAS macro language, I wrap the INTNX call in %SYSFUNC -- the macro function that breaks out of macro processing to invoke built-in SAS functions. I also need to remove the quotes around the interval and alignment values -- %SYSFUNC passes its arguments as text, so the quotation marks would be treated as part of the values.

%let six_mo_ago = 
 %sysfunc(
  intnx(
   month,             /* unit of time interval */
   %sysfunc(today()), /* function to get current date */
   -6,                /* number of intervals, negative goes to the past */
   same               /* alignment of interval date. "Same" is for same day of month */
   )
  );
 %put &=six_mo_ago;

Output (from today's run):

45          %put &=six_mo_ago;
SIX_MO_AGO=20960

Notice the result is a number -- which is exactly what I need for date value comparisons. But what if I wanted a formatted version of the date? The %SYSFUNC macro function accepts an optional second argument: a format to apply to the result.

 %let six_mo_ago_fmt = 
  %sysfunc(
   intnx(
   month,             /* unit of time interval */
   %sysfunc(today()), /* function to get current date */
   -6,                /* number of intervals, negative goes to the past */
   same               /* alignment of interval date. "Same" is for same day of month */
   ), date9.  /* Tell %SYSFUNC how to format the result */
  );
 %put &=six_mo_ago_fmt;

Output (from today's run):

56          %put &=six_mo_ago_fmt;
SIX_MO_AGO_FMT=21MAY2017

Adapting INTNX for SAS datetime values

This computed date works perfectly when my data sets contain SAS date values that I want to filter. For example, I can limit the records to those from the past 6 months with code similar to this:

proc freq data=comm.visits (where=(date > &six_mo_ago.)) 
 noprint 
 ;
tables geo_country_code / out=country_visits_6mo;
run;

But I realized that some of my data sets use datetime values, not date values. A SAS datetime value is simply the number of seconds since midnight on Jan 1, 1960. I've seen programs that adapt the code above by converting the computed cutoff date from a date value to a datetime value -- it's simple to do with math:

proc freq data=comm.visits (where=(event_time > %sysevalf( &six_mo_ago. * 60 * 60 * 24 ))) 
 noprint 
 ;
tables geo_country_code / out=country_visits_6mo;
run;

A cleaner approach is to compute the cutoff as a datetime value in the first place. The INTNX function also supports datetime intervals, such as DTMONTH, and the DATETIME function returns the current datetime value:

%let six_mo_ago_dt = 
 %sysfunc(
  intnx(
   dtmonth,              /* unit of time interval */
   %sysfunc(datetime()), /* function to get current datetime */
   -6,                   /* number of intervals, negative goes to the past */
   same                  /* alignment of interval date. "Same" is for same day of month/same time */
   )
  );
 %put &=six_mo_ago_dt;

Output (at this moment):

34          %put &=six_mo_ago_dt;
SIX_MO_AGO_DT=1810991917.93526

I can then use the much more readable version of my comparison code:

proc freq data=comm.visits (where=(event_time > &six_mo_ago_dt)) 
 noprint 
 ;
tables geo_country_code / out=country_visits_6mo;
run;

Learning more about date and datetime intervals

First, an acknowledgment. I recently had the privilege of presenting at the Quebec user groups with SAS-world superstar Marje Fecht. Marje is an excellent instructor, and she delivered a popular talk about working with SAS date values -- a topic that trips up many new SAS users. It's useful for me -- someone who has used SAS for a long time -- to see a basic topic presented to me as if I were brand new. Her talk reminded me of some good practices that I was able to bring home and apply in my own production SAS jobs. I know that Marje was invited to present this talk at SAS Global Forum 2018, and I hope that I did not steal too much of her thunder. She has a good origin story about the 01JAN1960 date decision. You should attend the conference and see her talk in person.

The INTNX function (and its sister function for computing date differences, INTCK) is a powerful tool for manipulating date and datetime values. Other programming languages offer complex code libraries to accomplish what these two functions can do as part of Base SAS. They are tricky to learn at first, but once you get the hang of them they can really simplify your SAS programs that deal with time-based data.
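
For example, here's a small sketch that uses INTCK to count the month boundaries crossed between two dates:

data _null_;
 months_apart = intck('month', '15JAN2017'd, '13DEC2017'd);
 put months_apart=;           /* 11 month boundaries crossed */
run;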

Rick Wicklin presented a useful introduction to both functions in INTCK and INTNX: Two essential functions for computing intervals between dates in SAS.

While these functions are available in Base SAS, they are maintained by the developers who look after SAS/ETS (econometrics and time series). For more discussion and examples from the community, see this thread on the SAS Support Communities.

The post Computing a date from the past (or future) with SAS appeared first on The SAS Dummy.

November 11, 2017
 

Here's a Proc Print trick for grouped data. Suppose your data is divided into groups, such as males and females. You could sort by the grouping variable before printing, like this: Suppose you want to better emphasize the groups. You could add a BY statement, like this: OK, but, personally, [...]

The post Simple Proc Print trick for grouped data appeared first on SAS Learning Post.

November 8, 2017
 

A SAS programmer wanted to display a table in which the rows have different formats. An example is shown below. The programmer wanted columns that represent statistics and rows that represent variables. She wanted to display formats (such as DOLLAR) for some variables—but only for certain statistics. For example, the number of nonmissing observations (N) should never use the format of the variable, whereas the minimum, maximum, and mean values should.

Format rows of a table

In SAS you can easily apply a format to a column, but not to a row. However, SAS provides two techniques that enable you to control the appearance of cells in tables: PROC TEMPLATE and PROC REPORT.

Both PROC TEMPLATE and PROC REPORT have a learning curve. If you are in a hurry, an alternative solution is to use the DATA step to create a factoid. Recall that a factoid is an ODS table that contains character values and (formatted) numbers in the same column. A factoid can display mixed-type data by converting every value to a string. This conversion process gives you complete control over the format of each cell in the table. Consequently, you can use context to format some numbers differently from other numbers.

Creating a custom factoid

Suppose that you want to generate the table shown at the top of this article. You can obtain the statistics from PROC MEANS and then transpose the data. The following statements produce an output data set that contains descriptive statistics for three variables in the Sashelp.Cars data:

proc means data=sashelp.cars noprint;
var Cylinders EngineSize MSRP;
output out=stats(drop=_TYPE_ _FREQ_
                 rename=(_STAT_=Label1)
                 where=(Label1 ^= "STD") );
run;
 
proc print data=stats noobs;
run;
Descriptive statistics with formats

I have highlighted the first row of the output data to indicate why this output is not suitable for a report. The output variables retain the format of the original variables. However, the 'N' statistic is the sample size, and a sample of size '$428' does not make sense! Even '428.000' is a strange value to see for a sample size. It would be great to format the first row with an integer format such as 8.0.

The next DATA step creates three character variables (CVALUE1-CVALUE3) that contain formatted values of the three numerical variables in the data. The SELECT-WHEN statement does the following:

  • For each observation and for each of the three numerical variables:
    • If the LABEL1 variable is 'N', then apply the 8.0 format to the value of the numerical variable.
    • Otherwise use the VVALUE function to get the formatted value for the numerical variable.

data Factoid;
set stats;
array nValue[3] Cylinders EngineSize MSRP;      /* numerical variables */
array cValue[3] $12.;                           /* cValue[i] is formatted version of nValue[i] */
/* with a little effort you could assign labels dynamically */
label cValue1="Cylinders" cValue2="EngineSize" cValue3="MSRP";
 
do i = 1 to dim(nValue);
   select (Label1);
      when ('N')    cValue[i] = put(nvalue[i], 8.0);
      otherwise     cValue[i] = vvalue(nvalue[i]);
   end;
end;
run;
 
proc print data=Factoid label noobs;
   var Label1 cValue:;
run;

The output displays the character columns, which contain formatted numbers. With additional logic, you could display fewer decimal values for the MEAN row.
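
For example, here's a minimal sketch of that additional logic. It reuses the STATS data set created above and adds one more WHEN branch; the 12.2 format is just an illustration:

data Factoid2;
set stats;
array nValue[3] Cylinders EngineSize MSRP;      /* numerical variables */
array cValue[3] $12.;                           /* formatted character copies */
label cValue1="Cylinders" cValue2="EngineSize" cValue3="MSRP";
 
do i = 1 to dim(nValue);
   select (Label1);
      when ('N')    cValue[i] = put(nvalue[i], 8.0);
      when ('MEAN') cValue[i] = put(nvalue[i], 12.2);  /* fewer decimals for the mean */
      otherwise     cValue[i] = vvalue(nvalue[i]);
   end;
end;
run;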

Transpose the factoid

The previous table contains the information that the programmer wants. However, the programmer asked to display the transpose of the previous table. For completeness, here are SAS statements that transpose the table:

proc transpose data=Factoid out=TFactoid;
   var cValue:;
   ID Label1;
   IDLABEL Label1;
run;
 
proc print data=TFactoid(drop=_NAME_) noobs label;
   label _LABEL_ = "Variable";
run;

The final table is displayed at the top of the article.

Summary

In summary, this article shows one way to display a column of data in which each cell potentially has a different format. The trick is to create a factoid, which means that you create a character copy of numeric data. As you create the copy, you can use the PUT function to apply a custom format, or you can use the VVALUE function to get the existing formatted value.

In general, you should try to avoid creating copies of data, so use PROC TEMPLATE and PROC REPORT if you know how. However, if you don't know how to use those tools, the factoid provides an alternative way to control the formatted values that appear in a table. Although not shown in this article, a factoid also enables you to display character and numeric values in the same column.

The post How to format rows of a table in SAS appeared first on The DO Loop.