Posted on behalf of SAS Press author Derek Morgan.

I was sitting in a model railroad club meeting when one of our more enthusiastic young members said, "Wouldn't it be cool if we could make a computer simulation, with trains going between stations and all. We could have cars and engines assigned to each train and timetables and…"

So, I thought to myself, "Timetables… I bet SAS can do that easily. Sounds like something fun for Mr. Dates and Times."

As it turns out, the only easy part of creating a timetable is calculating the time. SAS handles the concept of elapsed time smoothly; it's still addition and subtraction, which is the basis of how dates and times work in SAS. If a train starts at 6:00 PM (64,800 seconds of the day) and arrives at its destination 12 hours (43,200 seconds) later, the math is start time + duration = end time: 64,800 + 43,200 = 108,000 seconds, which is 6:00 AM the next day (108,000 − 86,400 = 21,600 seconds, or 6:00 AM). It doesn't matter which day; that train is always scheduled to arrive at 6:00 AM, 12 hours after it left.
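That arithmetic can be sketched in a short DATA step (the variable names are mine, not from the club project):

```sas
data arrival;
   depart   = '18:00't;            /* 64,800 seconds into the day */
   duration = 12*60*60;            /* 12 hours = 43,200 seconds   */
   arrive   = depart + duration;   /* 108,000 seconds             */
   /* wrap past midnight to get the time of day on the next day */
   arrive_tod = mod(arrive, '24:00't);
   format depart arrive_tod timeampm11.;
   put depart= arrive_tod=;        /* departs 6:00 PM, arrives 6:00 AM */
run;
```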

It got a lot more complicated when the group grabbed onto the idea. One of the things they wanted to do was add or delete trains and adjust the timing so that multiple trains don't run over the same track at the same time. That wouldn't be difficult in SAS; I could just create an interactive application, but I'm the only one who has SAS. So how do I share my SAS data with the "outside world"? The answer was Microsoft Excel, and this is where it gets thorny.

It's easy enough to send SAS data to Excel using ODS EXCEL and PROC REPORT, but how could I get Excel to let the club manipulate the data I sent?
I used the COMPUTE block in PROC REPORT to write an Excel formula into every visible train column. I duplicated the original columns (with corresponding names to keep it all straight) and hid the duplicates in the same spreadsheet; each visible formula refers to its hidden counterpart. The compute block that generates the Excel formulas is shown below.

Compute Block Code:
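A sketch of how such a compute block can work. ODS EXCEL passes a cell value that begins with an equal sign through to Excel as a formula, so the visible train column is a computed character column whose value is an IF formula referencing the hidden copy. The column letters, row numbers, HIDDEN_COLUMNS= range, and offset cell $B$2 below are all illustrative, not the author's production code:

```sas
ods excel file='schedule.xlsx' options(hidden_columns='2');  /* hide sheet column B */

proc report data=sched;
   column station t534_loc_h t534_loc;
   define station    / display 'Station';
   define t534_loc_h / display ' ';   /* duplicate of the train column, hidden above */
   define t534_loc   / computed 'T534';
   compute before;
      row = 4;                        /* first data row below the header rows */
   endcomp;
   compute t534_loc / character length=40;
      /* visible cell = hidden cell + offset in $B$2, unless the hidden cell is blank */
      t534_loc = cats('=IF(B', row, '="","",B', row, '+$B$2)');
      row = row + 1;                  /* temporary variables are retained across rows */
   endcomp;
run;

ods excel close;
```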

I also added three rows at the top of the dataset. The first contains the original start time for each train, the second contains an offset (always zero to begin with), and the third is blank (and formatted with a black background) to separate them from the actual schedule.

Figure 1: Schedule Adjustment File

The users change the offset to change the starting time of a train (column C, Figure 2). The formula in each visible column adds the offset to the value in the corresponding cell of the hidden column (as long as that cell isn't blank). You can't simply add the offset to the value of the visible cell itself, because that would be a circular reference.

The next problem was moving a train to an earlier starting time, because Excel has no concept of negative time (or negative dates: a date prior to Excel's reference date of January 1, 1900 becomes a character value in Excel and will cause your entire column to be imported into SAS as character data). Similarly, you can't enter -1:00 as an offset to move the starting time of our 5:35 AM train to 4:35 AM; Excel treats "-1:00" as a character value, which causes a calculation error. To move that train to 4:35 AM, you instead add 23 hours to the original starting time and let the time of day wrap around (column D, Figure 2).
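In SAS itself a negative offset is unremarkable; the 23-hour trick is only needed on the Excel side, and it works because times wrap modulo 24 hours. A quick illustration:

```sas
data _null_;
   start = '05:35't;
   /* subtracting 1 hour and adding 23 hours are equivalent modulo 24 hours */
   minus1 = mod(start - '01:00't + '24:00't, '24:00't);
   plus23 = mod(start + '23:00't, '24:00't);
   put minus1= timeampm11. plus23= timeampm11.;   /* both are 4:35:00 AM */
run;
```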

Figure 2: Adjusting Train Schedules

After the users adjust the schedules, it’s time to return our Excel data to SAS, which creates more challenges. In the screenshot above, T534_LOC is the identifier of a specific train, and the timetable is kept in SAS time values. Unfortunately, PROC IMPORT using DBMS=XLSX brings the train columns into SAS as character data. T534_LOC also imports as the actual Excel value, time as a fraction of a day.

Figure 3: How the Schedule Adjustment File Imports to SAS

While I can fix that by converting the character data to numeric and multiplying by 86,400, I still need the original column name of T534_LOC for the simulation, so I would have to rename each character column and output the converted data to the original column name. There are currently 146 trains spread across 12 files, and that is a lot of work for something that was supposed to be easy! Needless to say, this “little” side project, like most model railroads, is still in progress. However, this exercise in moving time data between Microsoft Excel and SAS gave me even more appreciation for the way SAS handles date and time data.
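A minimal version of that fix for a single train column might look like the following (file and dataset names are illustrative; with 146 trains you would generate the rename/convert pairs with a macro rather than by hand):

```sas
proc import datafile='schedule.xlsx' out=raw dbms=xlsx replace;
run;

data fixed;
   set raw(rename=(t534_loc=t534_loc_c));   /* column arrives as character */
   /* Excel stores a time as a fraction of a day; scale it to SAS seconds */
   t534_loc = input(t534_loc_c, ?? best32.) * 86400;
   format t534_loc time5.;
   drop t534_loc_c;
run;
```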

Figure 4 is a partial sample of the finished timetable file, generated as an RTF file using SAS. The data for trains 534 and 536 are from the spreadsheet in Figure 1.

Figure 4: Partial Sample Timetable

Want to learn more about how to use and manipulate dates, times, and datetimes in SAS? You'll find the answers and much more in my book The Essential Guide to SAS Dates and Times, Second Edition. Updated for SAS 9.4 with additional functions, formats, and capabilities, the Second Edition has a new chapter dedicated to the ISO 8601 standard and the formats and functions that are new to SAS, including how SAS works with Coordinated Universal Time (UTC). Chapter 1 is available as a free preview here.

For updates on new SAS Press books and great discounts, subscribe to the SAS Press New Book Newsletter.

One of my favorite parts of summer is a relaxing weekend by the pool. Summer is when I finally catch up on the reading list that has been building all year. So, if expanding your knowledge is a goal of yours this summer, SAS Press has a shelf full of new titles for you to explore. To help you navigate your selection, we asked some of our authors which SAS books are on their summer reading lists.

### Teresa Jade

Teresa Jade, co-author of SAS® Text Analytics for Business Applications: Concept Rules for Information Extraction Models, has already started The DS2 Procedure: SAS Programming Methods at Work by Peter Eberhardt. Teresa reports that the book “is a concise, well-written book with good examples. If you know a little bit about the SAS DATA step, then you can leverage what you know to more quickly get up to speed with DS2 and understand the differences and benefits.”

### Derek Morgan

Derek Morgan, author of The Essential Guide to SAS® Dates and Times, Second Edition, tells us his go-to books this summer are Art Carpenter’s Complete Guide to the SAS® REPORT Procedure and Kirk Lafler's PROC SQL: Beyond the Basics Using SAS®, Third Edition. He also notes that he “learned how to use hash objects from Don Henderson’s Data Management Solutions Using SAS® Hash Table Operations: A Business Intelligence Case Study.”

### Chris Holland

Chris Holland, co-author of Implementing CDISC Using SAS®: An End-to-End Guide, Revised Second Edition, recommends Richard Zink’s JMP and SAS book, Risk-Based Monitoring and Fraud Detection in Clinical Trials Using JMP® and SAS®, which describes how to improve efficiency while reducing costs in trials with centralized monitoring techniques.

### And our recommendations this summer?

Download our two new free e-books which illustrate the features and capabilities of SAS® Viya®, and SAS® Visual Analytics on SAS® Viya®.

Want to be notified when new books become available? Sign up to receive information about new books delivered right to your inbox.

### What is Item Response Theory?

Item Response Theory (IRT) is a way to analyze responses to tests or questionnaires with the goal of improving measurement accuracy and reliability.

A common application is testing a student’s ability or knowledge, and today all major psychological and educational tests are built using IRT. The methodology can significantly improve measurement accuracy and reliability while potentially providing significant reductions in assessment time and effort, especially via computerized adaptive testing. For example, the SAT and GRE both use Item Response Theory. IRT takes into account both the number of questions answered correctly and the difficulty of each question.

In recent years, IRT models have also become increasingly popular in health behavior, quality-of-life, and clinical research. There are many different IRT models.

Early IRT models (such as the Rasch model and two-parameter model) concentrate mainly on dichotomous responses. These models were later extended to incorporate other formats, such as ordinal responses, rating scales, partial credit scoring, and multiple category scoring.
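For reference, the dichotomous models mentioned above have compact closed forms: the probability that a person with ability \(\theta\) answers item \(i\) correctly, given its difficulty \(b_i\) (and, in the two-parameter case, its discrimination \(a_i\)), is

```latex
% Rasch (one-parameter logistic) model
P(X_i = 1 \mid \theta) = \frac{e^{\theta - b_i}}{1 + e^{\theta - b_i}}

% Two-parameter logistic model: adds a discrimination parameter a_i
P(X_i = 1 \mid \theta) = \frac{e^{a_i(\theta - b_i)}}{1 + e^{a_i(\theta - b_i)}}
```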

### Item Response Theory Models Using SAS

Ron Cody and Jeffrey K. Smith’s book, Test Scoring and Analysis Using SAS, uses SAS PROC IRT to show how to develop your own multiple-choice tests, score students, produce student rosters (in print form or Excel), and explore item response theory (IRT).

Aimed at non-statisticians working in education or training, the book describes item analysis and test reliability in easy-to-understand terms and teaches SAS programming to score tests, perform item analysis, and estimate reliability.

For those with a more statistical background, Bayesian Analysis of Item Response Theory Models Using SAS describes how to estimate and check IRT models using the SAS MCMC procedure. Written especially for psychometricians, scale developers, and practitioners, the book provides numerous annotated programs that you can easily modify for your own applications.

Assessment has played, and continues to play, an integral part in our work and educational settings. IRT models continue to be increasingly popular in many other fields, such as medical research, health sciences, quality-of-life research, and even marketing research. With the use of IRT models, you can not only improve scoring accuracy but also economize test administration by adaptively using only the discriminative items.

Interested in learning more? Check out our chapter previews available for free. Want to learn more about SAS Press? Explore our online bookstore and subscribe to our newsletter to get all the latest discounts, news, and more.

### Further resources

SAS Blogs:
New at SAS: Psychometric Testing by Charu Shankar
SAS author’s tip: Bayesian analysis of item response theory models

SAS Global Forum Paper:
Item Response Theory: What It Is and How You Can Use the IRT Procedure to Apply It by Xinming An and Yiu-Fai Yung

Understanding Item Response Theory with SAS was published on SAS Users.

Want to learn SAS programming but worried about taking the plunge? Over at SAS Press, we are excited about an upcoming publication that introduces newcomers to SAS using an instructional format we have found popular in the classroom. Professors Jim Blum and Jonathan Duggins have written Fundamentals of Programming in SAS using a spiral curriculum that builds on topics introduced earlier and, at its core, uses large-scale projects presented as case studies. To get ready for the release, we interviewed our new authors about how their title will enhance the SAS Books collection and which existing SAS titles have had an impact on their lives!

### What does your book bring to the SAS collection? Why is it needed?

Blum & Duggins: The book is probably unique in the sense that it is designed to serve as a classroom textbook, though it can also be used as a self-study guide. That also points to why we feel it is needed; there is no book designed for what we (and others) do in the classroom. As SAS programming is a broad topic, the goal of this text is to give a complete introduction to effective programming in Base SAS, covering topics such as working with data in various forms, data manipulation, creating a variety of tabular and visual summaries of data, and data validation and good programming practices.

The book pursues these learning objectives using large-scale projects presented as case studies. The intent of coupling the case-study approach with the introductory programming topics is to create a path for a SAS programming neophyte to evolve into an adept programmer by showing them how programmers use SAS, in practice, in a variety of contexts. The reader will gain the ability to author original code, debug pre-existing code, and evaluate the relative efficiencies of various approaches to the same problem using methods and examples supported by pedagogical theory. This makes the text an excellent companion to any SAS programming course.

### What is your intended audience for the book?

Blum & Duggins: This text is intended for use in both undergraduate and graduate courses, without the need for previous exposure to SAS. However, we expect the book to be useful for anyone with an aptitude for programming and a desire to work with data, as a self-study guide to work through on their own. This includes individuals looking to learn SAS from scratch or experienced SAS programmers looking to brush up on some of the fundamental concepts. Very minimal statistical knowledge, such as elementary summary statistics (e.g. means and medians), would be needed. Additional knowledge (e.g. hypothesis testing or confidence intervals) could be beneficial but is not expected.

### What SAS book(s) changed your life? How? And why?

Blum: I don’t know if this qualifies, but the SAS Programming I and SAS Programming II course notes fit this best. With those, and the courses, I actually became a SAS programmer instead of someone who just dabbled (and dabbled ineffectively). From there, many doors were opened for me professionally and, more importantly, I was able to start passing that knowledge along to students and open some doors for them. That experience also served as the basis for building future knowledge and passing it along, as well.

Duggins: I think the two SAS books that most changed my outlook on programming (which I guess has become most of my life, for better or worse) would either be The Essential PROC SQL Handbook for SAS Users by Katherine Prairie or Jane Eslinger's The SAS Programmer's PROC REPORT Handbook, because I read them at different times in my SAS programming career. Katherine's SQL book changed my outlook on programming because, until then, I had never done enough SQL to consistently consider it as a viable alternative to the DATA step. I had taken a course that taught a fair amount of SQL, but since I had much more experience with the DATA step and since that is what was emphasized in my doctoral program, I didn't use SQL all that often. However, after working through her book, I definitely added SQL to my programming arsenal. I think learning it, and then having to constantly evaluate whether the DATA step or SQL was better suited to my task, made me a better all-around programmer.

As for Jane's book - I read it much later after having used some PROC REPORT during my time as a biostatistician, but I really wasn't aware of how much could be done with it. I've also had the good fortune to meet Jane, and I think her personality comes through clearly - which makes that book even more enjoyable now than it was during my first read!

We at SAS Press are really excited to add this new release to our collection and will continue releasing teasers until its publication. For almost 30 years, SAS Press has published books by SAS users for SAS users. Here is a free excerpt, available on Duggins' author page, to whet your appetite. (Please note that this excerpt is an unedited draft, not the final content.) Look out for news on this new publication; you will not want to miss it!

Want to find out more about SAS Press? For more about our books, subscribe to our newsletter. You’ll get all the latest news and exclusive newsletter discounts. Also, check out all our new SAS books at our online bookstore.

Often, the SAS 9.4 administration environment architecture can seem confusing to new administrators. You may be faced with questions like: What is a tier? Why are there so many servers? What is the difference between distributed and non-distributed installations?

Understanding SAS 9.4 architecture is key to tackling the tasks and responsibilities that come with SAS administration and will help you know where to look to make changes or troubleshoot problems. One of the ways I have come to think about SAS 9.4 architecture is to think of it like building a house.

So, what is the first thing you need to build a house? Besides money and a Home Depot rewards credit card, land is the first thing you need to put the house on. For SAS administration the land is your infrastructure and hardware, and the house you want to build on that land is your SAS software. You, the admin, are the architect. Sometimes building a house can be simple, so only one architect is needed. Other times, for more complex buildings, an entire team of architects is needed to keep things running smoothly.

Once the architect decides how the house should look and function, and the plans are signed off, the foundation is laid. In our analogy, this foundation is the SAS metadata server – the rest of the installation sits on top of it.

Next come the walls and ceilings for either a single-story ranch house (a distributed SAS environment) or a multi-story house (a non-distributed SAS environment). Once the walls are painted, the plumbing installed, and the carpets laid, you have a house made up of different rooms. Each room has a task: a kitchen to make food, a child’s bedroom to sleep in, and a living room to relax and be with family. Each floor and each room serve the same purpose as a SAS server – each server is dedicated to a specific task and has a specific purpose.

Finally, all of the items in each room, such as the bed, toys, and kitchen utensils, can be equated to data sources: a SAS data set, data pulled in from Hadoop, or an Excel spreadsheet. Knowing what is in each room helps you find objects by knowing where they belong.

Once you move into a house, though, the work doesn’t stop there, and the same is true for a SAS installation. Just like the upkeep on a house (painting the exterior, fixing appliances when they break, etc.), SAS administration requires maintenance to keep everything running smoothly.

### How this relates to SAS

To pull this analogy back to SAS, let us start with the different install flavors (single house versus townhouse, single story versus multiple stories). SAS can be installed either as a SAS Foundation install or as a metadata-managed install. A SAS Foundation install is the most basic (think Base SAS). A metadata-managed install is the SAS 9 Intelligence Platform, with many more features and functionality than Base SAS. With SAS Foundation, your users work on their personal machines or use Remote Desktop or Citrix; there is no centrally managed metadata. In a metadata-managed install, your users work on a dedicated SAS server. Both deployments can be installed on physical or virtual machines, and all SAS solution administration is based on SAS 9.4 platform administration.

We hope you find this overview of SAS platform administration helpful. For more information check out this list of links to additional admin resources from my new book, SAS® Administration from the Ground Up: Running the SAS®9 Platform in a Metadata Server Environment.

Interested in making business decisions with big data analytics? Our Wiley SAS Business Series book Profit Driven Business Analytics: A Practitioner’s Guide to Transforming Big Data into Added Value by Bart Baesens, Wouter Verbeke, and Cristian Danilo Bravo Roman has just the information you need to learn how to use SAS to make data and analytics decision-making a part of your core business model!

This book combines the authorial team’s worldwide consulting experience and high-quality research to open up a road map to handling data, optimizing data analytics for specific companies, and continuously evaluating and improving the entire process.

In the following excerpt from their book, the authors describe a value-centric strategy for using analytics to heighten the accuracy of your enterprise decisions:

“'Data is the new oil' is a popular quote pinpointing the increasing value of data and — to our liking — accurately characterizes data as raw material. Data are to be seen as an input or basic resource needing further processing before actually being of use.”

### Analytics process model

In our book, we introduce the analytics process model, which describes the iterative chain of processing steps involved in turning data into information or decisions, a chain that is actually quite similar to an oil refinery process. Note the subtle but significant difference between the words data and information in the previous sentence. Whereas data can fundamentally be defined as a sequence of zeroes and ones, information is essentially the same but additionally implies a certain utility or value to the end user or recipient.

So, whether data are information depends on whether the data have utility to the recipient. Typically, for raw data to be information, the data first need to be processed, aggregated, summarized, and compared. In summary, data typically need to be analyzed, and insight, understanding, or knowledge should be added for data to become useful.

Applying basic operations to a dataset may already provide useful insight and support the end user or recipient in decision making. These basic operations mainly involve selection and aggregation. Both may be performed in many ways, leading to a plenitude of indicators or statistics that can be distilled from raw data. Providing insight through customized reporting is exactly what the field of business intelligence (BI) is about.

Business intelligence is an umbrella term that includes the applications, infrastructure and tools, and best practices that enable access to — and analysis of — information to improve and optimize decisions and performance.

This model defines the subsequent steps in the development, implementation, and operation of analytics within an organization.

Step 1
As a first step, a thorough definition of the business problem to be addressed is needed. The objective of applying analytics needs to be unambiguously defined. Some examples are: customer segmentation of a mortgage portfolio, retention modeling for a postpaid Telco subscription, or fraud detection for credit cards. Defining the perimeter of the analytical modeling exercise requires a close collaboration between the data scientists and business experts. Both parties need to agree on a set of key concepts; these may include how we define a customer, transaction, churn, or fraud. Whereas this may seem self-evident, it appears to be a crucial success factor to make sure a common understanding of the goal and some key concepts is agreed on by all involved stakeholders.

Step 2
Next, all source data that could be of potential interest need to be identified. The golden rule here is: the more data, the better! The analytical model itself will later decide which data are relevant and which are not for the task at hand. All data will then be gathered and consolidated in a staging area which could be, for example, a data warehouse, data mart, or even a simple spreadsheet file. Some basic exploratory data analysis can then be considered using, for instance, OLAP facilities for multidimensional analysis (e.g., roll-up, drill down, slicing and dicing).

Step 3
Next comes the analytics step, in which an analytical model is estimated on the preprocessed and transformed data. Depending on the business objective and the exact task at hand, a particular analytical technique will be selected and implemented by the data scientist.

Step 4
Finally, once the results are obtained, they will be interpreted and evaluated by the business experts. Results may be clusters, rules, patterns, or relations, among others, all of which will be called analytical models resulting from applying analytics. Trivial patterns detected by the analytical model (e.g., an association rule stating that spaghetti and spaghetti sauce are often purchased together) are still interesting, as they help to validate the model. But of course, the key issue is to find the unknown yet interesting and actionable patterns (sometimes also referred to as knowledge diamonds) that can provide new insights into your data, which can then be translated into new profit opportunities!

Step 5
Once the analytical model has been appropriately validated and approved, it can be put into production as an analytics application (e.g., decision support system, scoring engine). Important considerations here are how to represent the model output in a user-friendly way, how to integrate it with other applications (e.g., marketing campaign management tools, risk engines), and how to make sure the analytical model can be appropriately monitored and back-tested on an ongoing basis.

### Book giveaway!

If you are as excited about business analytics as we are and want a copy of Bart Baesens’ book Profit Driven Business Analytics: A Practitioner’s Guide to Transforming Big Data into Added Value, enter our book giveaway today! The first 5 commenters to correctly answer the question below get a free copy of Baesens’ book! Winners will be contacted via email.

Here's the question:
What free SAS Press e-book did Bart Baesens write the foreword to?

### Further resources

Want to prove your business analytics skills to the world? Check out our Statistical Business Analyst Using SAS 9 certification guide by Joni Shreve and Donna Dea Holland! This certification is designed for SAS professionals who use SAS/STAT software to conduct and interpret complex statistical data analysis.

For more information about the certification and certification prep guide, watch this video from co-author Joni Shreve on their SAS Certification Prep Guide: Statistical Business Analysis Using SAS 9.

As a SAS programmer, you are asked to do many things with your data -- reading, writing, calculating, building interfaces, and occasionally sending data outside of SAS. One of the outputs you are most likely to be asked for is a Microsoft Excel workbook. Have you ever heard, "just send me the spreadsheet"?

For an internal project the task is easy: open the SAS ODS EXCEL destination, run PROC PRINT, close the destination, and the workbook is ready. But if the workbook or spreadsheet is to be delivered somewhere else, you may need to spruce it up a bit. Of course, you can manually change virtually everything on the spreadsheet, but that takes a lot of employee time. And if the spreadsheet is delivered on a periodic basis, you may not produce it the same way every time.
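The basic workflow described above is an ODS "sandwich": open the destination, run a procedure, and close the destination. A minimal sketch (the file path and dataset are illustrative):

```sas
/* Open the EXCEL destination -- the path is an example */
ods excel file="/tmp/class_report.xlsx";

proc print data=sashelp.class noobs;
run;

/* The workbook is not complete until the destination closes */
ods excel close;
```
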

### Saving you time and money

Suppose you run a PROC PRINT with a BY statement and produce a Microsoft Excel workbook with 100 pages. If each of those pages needs to be printed and mailed to one of 100 clients, do you want to be the person who manually changes each page to print in landscape? The SAS ODS EXCEL destination has over 125 options and sub-options that can perform various tasks while the workbook is being written. One such option sets the worksheet to print in "landscape" format.
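A sketch of that scenario, with one worksheet per BY group and landscape printing set as the workbook is written. The path is illustrative, and SASHELP.CLASS with AGE stands in for the client data:

```sas
proc sort data=sashelp.class out=class;
  by age;
run;

ods excel file="/tmp/by_client.xlsx"
          options(orientation="landscape"      /* print sideways        */
                  sheet_interval="bygroup");   /* one sheet per BY group */

proc print data=class noobs;
  by age;
run;

ods excel close;
```
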

As a programmer, I know that when I want to start a new project or learn new software, I look to two places in a book: the index and the table of contents. If I can think of a key word that might help me, I look to the index. But when searching general topics, I use the Table of Contents (TOC). It always frustrates me when a TOC is in alphabetical order, so I decided to organize my TOC as groups of options and SAS commands that affect similar parts or features of the Excel workbook.

To see this in action, the bullet points I have listed below identify the major topic sections of the book. These are, in fact, chapter titles presented after the introduction:

• ODS Tagset versus Destination
• ODS Excel Destination Actions
• Setting Excel Document Property Values
• Options That Affect the Workbook
• Arguments that Affect Output Features
• Options That Affect Worksheet Features
• Options That Affect Print Features
• Column, Row, and Cell Features

### Take a look inside

Allow me to describe each of these topics in a few words:

ODS Tagset versus Destination
Many people have used the SAS ODS tagset EXCELXP and will find many parts of the SAS ODS EXCEL destination to be very similar in both syntax and function. A tagset is PROC TEMPLATE code that can be changed by the user, while a SAS ODS destination is a built-in feature of SAS, much like a PROC or function, that cannot be changed by users. This section also describes the ID feature of ODS, which allows you to write more than one Excel workbook at a time.
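A minimal sketch of the ID feature, which opens two independent EXCEL destinations at once so a single step feeds both workbooks (the paths are illustrative):

```sas
/* Two destinations, distinguished by ID= */
ods excel (id=detail)  file="/tmp/detail.xlsx";
ods excel (id=summary) file="/tmp/summary.xlsx";

/* Output is written to every open EXCEL destination */
proc means data=sashelp.class;
run;

ods excel (id=summary) close;
ods excel (id=detail)  close;
```
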

ODS Excel Destination Actions
This area describes ODS features that may not be exclusive to ODS EXCEL but are useful in finding and choosing SAS outputs to be processed.

Setting Excel Document Property Values
Here you are shown how to change the comments, keywords, author, title, and other parts of the Excel Property sheet.

Options That Affect the Workbook
This section of the book shows you how to name the workbook, create blank worksheets, create a table of contents or index of worksheets within the workbook, change worksheet tab colors, and other options.
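For instance, worksheets can be named as the workbook is written with the SHEET_NAME sub-option. A small sketch (the path is illustrative):

```sas
ods excel file="/tmp/named_sheets.xlsx"
          options(sheet_name="Car Data");   /* name the worksheet tab */

proc print data=sashelp.cars(obs=10) noobs;
run;

ods excel close;
```
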

Arguments that Affect Output Features
The output features described here include changing the output style (coloration of the worksheet sections), finding and using stylesheet anchors, building and using Cascading Style Sheets, changing the Dots Per Inch (DPI) of the output data and/or graphs, and adding text to the worksheet.

Options That Affect Worksheet Features
SAS has many options that describe the output data. These include titles, footnotes, and BY-line text. Additionally, SAS can group output worksheets in many ways: by page, by procedure, by table, by BY group, or with no separation at all. Output can also be set to fit the page ("FITTOPAGE"), or a specific height or width can be selected, along with adding sheet names or labels to the Excel output worksheets.
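A sketch of a few of these worksheet sub-options together -- the sub-option names follow the ODS EXCEL documentation as I recall them, and the path is illustrative:

```sas
ods excel file="/tmp/grouped.xlsx"
          options(sheet_interval="proc"    /* one sheet per procedure   */
                  embedded_titles="yes"    /* put TITLE text on sheets  */
                  fittopage="yes");        /* scale output to the page  */

title "Class Roster";
proc print data=sashelp.class noobs;
run;
title;

ods excel close;
```
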

Options That Affect Print Features
Excel has many print features: printing in "black and white" only, centering horizontally or vertically, landscape or portrait orientation, draft or standard quality, the print order of the data, the area to print, Excel headers and footers, and others -- all of which SAS can adjust as the workbook is being written.
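A sketch combining a few print-oriented sub-options. The names here are my best recollection of the ODS EXCEL documentation (verify against your SAS release), and the path is illustrative:

```sas
ods excel file="/tmp/print_ready.xlsx"
          options(blackandwhite="on"        /* print without color       */
                  center_horizontal="yes"   /* center on the page        */
                  center_vertical="yes"
                  draftquality="on");       /* faster, lower-ink output  */

proc print data=sashelp.class noobs;
run;

ods excel close;
```
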

Column, Row, and Cell Features
Finally, SAS can adjust column and row features such as adding filters, changing the widths and heights of rows and columns, hiding rows or columns, inserting formulas, and even placing the data somewhere other than row one, column one.
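A sketch of several column and row sub-options at once -- again, the sub-option names follow the ODS EXCEL documentation as I recall them, and the path is illustrative:

```sas
ods excel file="/tmp/columns.xlsx"
          options(autofilter="all"             /* filter on every column   */
                  frozen_headers="yes"         /* keep header row visible  */
                  absolute_column_width="12"   /* fixed column width       */
                  start_at="2,3");             /* start at column 2, row 3 */

proc print data=sashelp.class noobs;
run;

ods excel close;
```
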

### Ready, set, go

My book Exchanging Data From SAS® to Excel: The ODS Excel Destination expands upon the SAS documentation by giving full descriptions and examples, including SAS code and Excel output, for nearly every option and sub-option of the SAS ODS EXCEL destination. In addition to this blog, check out a free chapter of my book to get started making your worksheets beautifully formatted. Get ready to follow the money and make your reports come out perfectly for publication in no time!

Making the most of the ODS Excel destination was published on SAS Users.

### Structuring a highly unstructured data source

Human language is astoundingly complex and diverse. We express ourselves in infinite ways. It can be very difficult to model and extract meaning from both written and spoken language. Usually the most meaningful analysis uses a number of techniques.

While supervised and unsupervised learning, and specifically deep learning, are widely used for modeling human language, there's also a need for syntactic and semantic understanding and domain expertise. Natural Language Processing (NLP) is important because it can help to resolve ambiguity and add useful numeric structure to the data for many downstream applications, such as speech recognition or text analytics. NLP outputs can then be run through data mining and machine learning algorithms to automatically extract key features and relational concepts. Human input in the form of linguistic rules adds to the process, enabling contextual comprehension.

Text analytics provides structure to unstructured data so it can be easily analyzed. In this blog, I would like to focus on two widely used text analytics techniques: information extraction and entity resolution.

### Information Extraction

Information Extraction (IE) automatically extracts structured information from unstructured or semi-structured text -- for example, a text file -- to create new structured text data. IE works at the sub-document level, in contrast with techniques such as categorization, which work at the document or record level. The results of IE can therefore feed into other analyses, such as predictive modeling or topic identification, as features for those processes. IE can also be used to create a new database of information -- for example, recording key information about terrorist attacks from a group of news articles on terrorism.

Any given IE task has a defined template, which is a case frame (or a set of case frames) that holds the information contained in a single document. For the terrorism example, a template would have slots corresponding to the perpetrator, victim, and weapon of the attack, and the date on which the event happened. An IE system for this problem is required to "understand" an attack article only enough to find data corresponding to the slots in this template. Such a database can then be analyzed through queries and reports about the data.

In their new book, SAS® Text Analytics for Business Applications: Concept Rules for Information Extraction Models, authors Teresa Jade, Biljana Belamaric Wilsey, and Michael Wallis, give some great examples of uses of IE:

"One good use case for IE is for creating a faceted search system. Faceted search allows users to narrow down search results by classifying results by using multiple dimensions, called facets, simultaneously. For example, faceted search may be used when analysts try to determine why and where immigrants may perish. The analysts might want to correlate geographical information with information that describes the causes of the deaths in order to determine what actions to take."

Another good example of using IE in predictive models is analysts at a bank who want to determine why customers close their accounts. They have an active churn model that works fairly well at identifying potential churn, but less well at determining what causes the churn. An IE model could be built to identify different bank policies and offerings, and then track mentions of each during any customer interaction. If a particular policy could be linked to certain churn behavior, then the policy could be modified to reduce the number of lost customers.

Reporting information found as a result of IE can provide deeper insight into trends and uncover details that were buried in the unstructured data. An example of this is an analysis of call center notes at an appliance manufacturing company. The results of IE show a pattern of customer-initiated calls about repairs and breakdowns of a type of refrigerator, and the results highlight particular problems with the doors. This information shows up as a pattern of increasing calls. Because the content of the calls is being analyzed, the company can return to its design team, which can find and remedy the root problem.

### Entity Resolution and regular expressions

Entity Resolution is the technique of recognizing when two observations relate to the same entity (thing, person, company) despite having been described differently -- and, conversely, recognizing when two observations do not relate to the same entity despite having been described similarly. For example, you might be listed in one database as S Roberts, Sian Roberts, and S.Roberts. All refer to the same person but would be treated as different people in an analysis unless they are resolved (combined into one person).

Entity resolution can be performed as part of a data pre-processing step or as text analysis. Basically, one cleans the data by resolving multiple entries, while the other resolves references to a single entity in order to extract meaning -- for example, pronoun resolution, where "it" refers to a particular company mentioned earlier in the text. Here is another example:

Assume each numbered item is a separate observation in the input data set:
1. SAS Institute is a great company. Our company has a recreation center and health care center for employees.
2. Our company has won many awards.
3. SAS Institute was founded in 1976.

In the scoring output, the document ID associated with each match aligns with the number before the input document where the match was found.

### Unstructured data clean-up

In the following section we focus on the pre-processing clean-up of the data. Unstructured data is the most voluminous form of data in the world, and analysts rarely receive it in perfect condition for processing. In other words, textual data needs to be cleaned, transformed, and enhanced before value can be derived from it.

A regular expression is a pattern that the regular expression engine attempts to match in input text. In SAS programming, regular expressions are strings of letters and special characters that are recognized by certain built-in SAS functions for the purpose of searching and matching. Combined with other built-in SAS functions and procedures, and with techniques such as entity resolution, you can realize tremendous capabilities. Matthew Windham, author of Unstructured Data Analysis: Entity Resolution and Regular Expressions in SAS®, gives some great examples in his book of how you might use these techniques to clean your text data. Here we share one of them:

"As you are probably familiar with, data is rarely provided to analysts in a form that is immediately useful. It is frequently necessary to clean, transform, and enhance source data before it can be used—especially textual data."

Extract, Transform, and Load (ETL) is a general set of processes for extracting data from its source, modifying it to fit your end needs, and loading it into a target location that enables you to best use it (e.g., database, data store, data warehouse). We're going to begin with a fairly basic example to get us started. Suppose we already have a SAS data set of customer addresses that contains some data quality issues. The method of recording the data is unknown to us, but visual inspection has revealed numerous occurrences of duplicative records. In the example below, it is clearly the same individual with slightly different representations of the address and encoding for gender. But how do we fix such problems automatically for all of the records?

| First Name | Last Name | DOB      | Gender | Street            | City     | State | Zip   |
|------------|-----------|----------|--------|-------------------|----------|-------|-------|
| Robert     | Smith     | 2/5/1967 | M      | 123 Fourth Street | Fairfax, | VA    | 22030 |
| Robert     | Smith     | 2/5/1967 | Male   | 123 Fourth St.    | Fairfax  | va    | 22030 |

Using regular expressions, we can algorithmically standardize abbreviations, remove punctuation, and do much more to ensure that each record is directly comparable. In this case, regular expressions enable us to perform more effective record keeping, which ultimately impacts downstream analysis and reporting. We can easily leverage regular expressions to ensure that each record adheres to institutional standards. We can make each occurrence of Gender either “M/F” or “Male/Female,” make every instance of the Street variable use “Street” or “St.” in the address line, make each City variable include or exclude the comma, and abbreviate State as either all caps or all lowercase.

This example is quite simple, but it reveals the power of applying some basic data standardization techniques to data sets. By enforcing these standards across the entire data set, we are then able to properly identify duplicative references within the data set. In addition to making our analysis and reporting less error-prone, we can reduce data storage space and duplicative business activities associated with each record (for example, fewer customer catalogs will be mailed out, thus saving money).
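A sketch of that standardization in a SAS DATA step using the PRX functions; the input data set ADDRESSES and its variable names are hypothetical placeholders for the records shown above:

```sas
data addresses_clean;
  set addresses;                                  /* hypothetical input set */

  /* Expand "St." (or "St") at the end of the address to "Street" */
  street = prxchange('s/\bSt\.?\s*$/Street/i', -1, street);

  /* Collapse Gender to a single-letter code: "Male" -> "M" */
  gender = upcase(substr(strip(gender), 1, 1));

  /* Drop any trailing comma from City; standardize State to caps */
  city  = prxchange('s/,\s*$//', 1, strip(city));
  state = upcase(state);
run;
```

With every record normalized to the same conventions, duplicate references like the two Robert Smith rows become directly comparable.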

Your unstructured text data is growing daily, and data without analytics is opportunity yet to be realized. Discover the value in your data with text analytics capabilities from SAS. The SAS Platform fosters collaboration by providing a toolbox where best practice pipelines and methods can be shared. SAS also seamlessly integrates with existing systems and open source technology.

Further Resources:
Natural Language Processing: What it is and why it matters

SAS® Text Analytics for Business Applications: Concept Rules for Information Extraction Models, by Teresa Jade, Biljana Belamaric Wilsey, and Michael Wallis

Unstructured Data Analysis: Entity Resolution and Regular Expressions in SAS®, by Matthew Windham

Text analytics explained was published on SAS Users.

Have you heard? SAS recently announced a new practical programming credential, SAS® Certified Specialist: Base Programming Using SAS® 9.4. This new credential is different from our previous exams: it requires you to take a performance-based exam, in which you access a SAS environment and then write and execute SAS code. Practical, right? At the end, your answers are scored for correctness.

This new SAS Certified Specialist credential will run in parallel with the current SAS Certified Base Programmer credential until June 2019. So make sure to check out the complete exam content guide for a list of objectives that are tested on the exam!

### A new certification guide

SAS is also releasing an accompanying certification guide to help you prepare for the new programming credential: SAS® Certified Specialist Prep Guide: Base Programming Using SAS® 9.4. This new certification guide has been streamlined to remove redundancy throughout the chapters and has been aligned with the courses, SAS® Programming 1: Essentials and SAS® Programming 2: Data Manipulation Techniques, along with the exam content guide.

The new certification guide also includes a workbook that provides programming scenarios to help you prepare for the performance-based portion of the exam. Make sure to also review the SAS® 9.4 Base Programming Exam Experience Tutorial to help you prepare!

Here are some of the changes made to the certification guide:

• Removal of INFILE/INPUT statements. These statements are no longer taught in SAS® Programming 1 and SAS® Programming 2 courses or tested in the exam. Thus, these statements have been replaced with the SET statement and the IMPORT procedure.
• Removal of arrays. Arrays are no longer taught in SAS® Programming 1 and SAS® Programming 2 courses or tested in the exam.
• Addition of the TRANSPOSE and EXPORT procedures, as well as macro variables. These additions are tested and are taught in the SAS® Programming 1 and SAS® Programming 2 courses.
• Updating of existing examples and the addition of more annotated examples that are easier to follow and have better quality graphics. These changes help to improve the readability of examples, and there is a closer relationship between the sample questions in the book and the questions that appear on the actual exam.
• Simplified installation of sample data and a simplified way of working with SAS libraries.
• A new companion piece, the Quick Syntax Reference Guide, is also available for download.

### More opportunities to test your skills

For even more programming practice, check out Ron Cody's Learning SAS by Example. It is full of practical examples and exercises with solutions that allow you to test your programming skills for the exam.

Not ready for the certification exam just yet? You can also browse our books for getting started with SAS. There you will find, of course, The Little SAS® Book: A Primer, Fifth Edition and Exercises and Projects for The Little SAS® Book, Fifth Edition. Both are great guides for new users who want to learn SAS and practice before taking the new Base Programming Specialist exam!