January 13, 2020

Are you ready to get a jump start on the new year? If you’ve been wanting to brush up your SAS skills or learn something new, there’s no time like a new decade to start! SAS Press is releasing several new books in the upcoming months to help you stay on top of the latest trends and updates. Whether you are a beginner who is just starting to learn SAS or a seasoned professional, we have plenty of content to keep you at the top of your game.

Here is a sneak peek at what’s coming next from SAS Press.

For students and beginners

For beginners, we have Exercises and Projects for The Little SAS® Book, Sixth Edition, the best-selling workbook companion to The Little SAS Book by Rebecca Ottesen, Lora Delwiche, and Susan Slaughter. The workbook will be updated to match the new The Little SAS® Book: A Primer, Sixth Edition. This hands-on workbook is designed to hone your SAS skills, whether you are a student or a professional.

For data explorers of all levels

This free e-book explores the features of SAS® Visual Data Mining and Machine Learning, powered by SAS® Viya®. Users of all skill levels can visually explore data on their own while drawing on powerful in-memory technologies for faster analytic computations and discoveries. You can manually program with custom code or use the features in SAS® Studio, Model Studio, and SAS® Visual Analytics to automate your data manipulation and modeling. These programs offer a flexible, easy-to-use, self-service environment that can scale on an enterprise-wide level. This book introduces some of the many features of SAS Visual Data Mining and Machine Learning, including programming in the Python interface; new, advanced data mining and machine learning procedures; pipeline building in Model Studio; and model building and comparison in SAS® Visual Analytics.

For health care data analytics professionals

If you work with real world health care data, you know that its use is common and growing, with data coming from sources like observational studies, pragmatic trials, patient registries, and databases. Real World Health Care Data Analysis: Causal Methods and Implementation in SAS® by Doug Faries et al. brings together best practices for causal-based comparative effectiveness analyses based on real world data in a single location. Example SAS code is provided to make the analyses relatively easy and efficient. The book also presents several emerging topics of interest, including algorithms for personalized medicine, methods that address the complexities of time-varying confounding, extensions of propensity scoring to comparisons among more than two interventions, sensitivity analyses for unmeasured confounding, and implementation of model averaging.

For those at the cutting edge

Are you ready to take your understanding of IoT to the next level? Intelligence at the Edge: Using SAS® with the Internet of Things, edited by Michael Harvey, begins with a brief description of the Internet of Things, how it has evolved over time, and the importance of SAS’s role in the IoT space. The book continues with a collection of chapters showcasing SAS’s expertise in IoT analytics. Topics include using SAS Event Stream Processing to process real-world events, connectivity, using the ESP Geofence window, applying analytics to streaming data, using SAS Event Stream Processing in a typical IoT reference architecture, the role of SAS Event Stream Manager in managing ESP deployments in an IoT ecosystem, how to use deep learning with your IoT digital twin, accounting for data quality variability in streaming GPS data for location-based analytics, and more!

Keep an eye out for these titles releasing in the next two months! We hope this list helps in your search for a SAS book that will get you to the next step in updating your SAS skills. To learn more about SAS Press, check out our upcoming titles, and to receive exclusive discounts, make sure to subscribe to our newsletter.

Foresight is 2020! New books to take your skills to the next level was published on SAS Users.

May 22, 2019

This blog shows how the automatically generated concepts and categories in Visual Text Analytics (VTA) can be refined using LITI and Boolean rules. Because of these capabilities, highly customized models can be developed in VTA. The rules used in this blog are basic. Developing linguistic rules and accurately categorizing documents requires subject matter expertise and an understanding of the grammatical structure of the language(s) used.

I will use a data set that contains information on 1,527 randomly selected movies: their titles, reviews, MPAA ratings, main genre classifications, and viewer ratings. Two customized categories will be developed: one for Children movies and the other for Sports movies. Because we are familiar with movie classification and MPAA ratings, it will be relatively easy to understand the rules used in this blog. The overall objective of this blog is to show how to formulate basic rules, so that their use can be extended to other fields.

SAS Visual Text Analytics (VTA) is the SAS offering designed to effectively extract insights from unstructured data at large scale. Offered on the SAS Viya architecture, VTA combines the power of Natural Language Processing (NLP), Machine Learning (ML), and linguistic rules. Currently, VTA supports 30 languages, and it has an open architecture supporting third-party programming interfaces.

As in all analytical projects, the discovery process in Text Analytics projects requires several iterations in which the insights found in one iteration are used in the next. With respect to the linguistic rules, one must determine whether the new rules are an improvement over the ones used in previous iterations by finding how many true positives and false positives are matched by the new rules. This process should be repeated until one obtains the required precision.
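
To make that iteration criterion concrete, precision is the fraction of matched documents that are true matches. Here is a minimal Python sketch of the calculation; the counts are hypothetical, not taken from this project:

# A minimal sketch: precision of a rule, with hypothetical counts
def precision(true_positives, false_positives):
    """Fraction of matched documents that are correct matches."""
    return true_positives / (true_positives + false_positives)

# e.g. a rule that matches 90 documents, 80 of them correctly:
print(round(precision(80, 10), 3))   # 0.889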

Initial Text Analysis Using Visual Analytics

Because Visual Analytics (VA) and VTA are highly integrated, the initial text analysis can be done in VA.

Every Text Analytics dataset must have a unique identifier associated with each document. In my blog, Discover Main Topics on #MLKDayofService Tweets Using SAS Visual Text Analytics, I showed how to set a “Unique Row Identifier” and how to work with the nodes in the Pipeline.

In VA, one can do the initial analysis of text data, see the Word Cloud, and view a list of topics. In the Options menu, I set Maximum of Topics to Generate to 7.

The photo above shows that there are 364 movies with the term “kid” in the Topic “+show,+kid, +rate,+movie”. We could build a category that groups movies appropriate for kids.

There are 203 documents with the Topic related to science fiction. If I wanted a category for Sports movies, however, I would have to build it myself, because sports terms appear in far fewer documents.

Create a Visual Text Analytics Project

In VTA, a pipeline is a process flow diagram whose nodes represent tasks in the text analysis process. I described in detail how to work with the nodes in my #MLKDayofService blog mentioned above. Briefly, from the SAS Home menu, select the action Build Models; this takes you to SAS Model Studio, where you select and create a New Project.

The photo above shows the data role assignments done in the Data tab.

Notice that there is a Unique Row Identifier for each document, the Text Variable to analyze is Review, and two variables are used as Category: MPAARating and mainGenre. Later on, VTA will use these two variables to automatically create categories and their Boolean rules. Title doesn’t have a role, but I want it to be displayed to facilitate the analysis.

Movies are already classified according to their main genre (mainGenre). I want to see the Boolean rules that VTA automatically generates for each category, and whether I can create new concepts and categories that improve on the initial categorization. For example, I would like:

  1. to find children movies that don’t contain violence,
  2. to find movies that are related to Sports,
  3. to read the reviews of my favorite old movies, and
  4. to find movies whose reviews mention some of my favorite movie directors.

Method

I ran two pipelines. The first one had the default pipeline settings, with the option Include predefined concepts enabled in the Concepts node. The objective was to see the rules associated with the genres “Sports”, “Animated”, and “Family”, the movies matched by these rules, as well as the ones that shouldn’t have been matched. In the second pipeline, I developed LITI and Boolean rules with the objective of improving the default categorizations automatically produced in the first pipeline.

In the next sections I will describe how the new categories were built. In real business situations, sometimes we will have pre-defined categories available to us, and other times we will come up with categories that satisfy the business objectives after analyzing many documents.

Customized Concepts Built in the Concepts Node

In the second pipeline, I developed three customized concepts. I will use one of them, “MySports”, to build a new category later on.

Basic Boolean operators are used to define new concepts and categories. The AND and NOT operators are applied to the whole document. There are other operators that search within the same sentence (SENT), within the same paragraph (PARA), or within a given number of terms (DIST).

# Any line that starts with "#" is a comment
# Use CLASSIFIER to match a literal sequence
# Use CONCEPT_RULE to apply Boolean and proximity operators. The term to extract is marked with _c{ }

MySports Concept

I wrote this rule, which matches 98 documents, most of them related to Sports movies, with few false positives. The rule matches a document if any of the terms sport, baseball, tennis, football, basketball, or racetrack appears anywhere in the document (movie review), and none of the terms gambling, buddy, or sporting appears anywhere in the document.

I will use this MySports CONCEPT_RULE to build the new Sports category:

CONCEPT_RULE: (AND, (OR, "_c{sport@}", "_c{baseball}", "_c{tennis}", "_c{football}", "_c{basketball}", "_c{racetrack}"), (NOT, "sporting"), (NOT, "gambling"), (NOT, "buddy"))

filmmakersInReview Concept

I built this concept just to illustrate how to use a predefined concept, in this case nlpPerson:

CONCEPT_RULE: (DIST_10, (OR, "filmmaker", "director", "film producer", "producer", "movie maker"), "_c{nlpPerson}")

favoriteMovies Concept

I built this concept to match my favorite old movies and one of my favorite directors. The first CONCEPT_RULE will match documents that contain the terms Stanley Kubrick and 2001 in the same sentence. The second CONCEPT_RULE will match documents that contain the two terms anywhere in the document. Both CONCEPT_RULEs will extract only the first term, “Stanley Kubrick”:

CLASSIFIER: A Space Odyssey
CLASSIFIER: The Sound of Music
CLASSIFIER: Il Postino
CONCEPT_RULE: (SENT, "_c{Stanley Kubrick}", "2001")
CONCEPT_RULE: (AND, "_c{Stanley Kubrick}", "A Space Odyssey")

New Concepts in the Text Parsing Node

The customized concepts developed in the Concepts node are passed to the Text Parsing node. Notice the Terms football, sport, sports, and baseball and the new Role MySports in the Kept Terms window, as well as the documents matched by the term “football”:

Customized Categories in the Categories Node

In the second pipeline I developed new categories using as starting point the rules associated with the genres “Sports”, “Animated” and “Family”.

Sports Category

The input data has only 3 movies in the Sports category, and it is difficult to generate a meaningful rule from such a small dataset. Once the first pipeline is run, a total of 8 movies are matched: the 3 original ones plus 5 that are not related to sports. The automatically generated rule is:

(OR,(AND,"crowd-pleaser"),(AND,"conor"),(AND,"_x000d_stupid"))

For the second pipeline, I developed the MySports rule in the Concepts node as mentioned above, and wrote this Boolean rule in the Categories node:

(OR,(AND,"crowd-pleaser"),(AND,"conor"),(AND,"_x000d_stupid"),"[MySports]")

The new rule matches 90 movies, most of them related to Sports. For the next iteration, one would need to look at the matched movies that don’t relate to Sports and the Sports movies that were not matched, and improve the rule above.

ChildrenMovies Category

In the second pipeline, I combined the rules for the Family and Animation categories which were automatically produced in the first pipeline.

For the Family category, there were 6 movies matched by this rule:

(OR,(AND,(OR,"adults","adult"),"oz"))

It matched “People vs Larry Flynt”, which prompted me to use the terms “murder” and “obscenity” in the Concept rule.

The Animation category had 66 matched movies, and the automatically generated rule was:

(OR,(AND,"pixar"),(AND,(OR,"animator","animators")),(AND,(OR,"voiced","voices","voicing","voice"),(OR,"cartoon","cartoons")),(AND,(OR,"cartoon characters","cartoon character")),(AND,(OR,"lesson","lessons"),"animated"),(AND,"live action"),(AND,"jeffrey",(OR,"features","feature")),(AND,"3-d"))

I decided to modify these two rules. In the second pipeline, I used this new rule:

(OR,(AND,(NOT,(OR,"adults","adult","suitable for children","rated R","strip@","suck@","crude humor","gore","horror","murder","obscenity","drug use@")),"Wizard of Oz"),(AND,"pixar"),(AND,(OR,"animator","animators")),(AND,(OR,"voiced","voices","voicing","voice"),(OR,"cartoon","cartoons")),(AND,(OR,"cartoon characters","cartoon character")),(AND,(OR,"lesson","lessons"),"animated"),(AND,"live action"),(AND,"jeffrey",(OR,"features","feature")),(AND,"3-d"))

This new rule matched 73 movies, only two of which were rated “R”. Therefore, I removed both the Animation and the Family categories and created the new category childrenMovies.

Again, to determine whether the new rules are an improvement over the previous ones, we must find out how many true positives and false positives are matched by the new rules, and repeat the process until we obtain the required precision.

Conclusion

Because the automatically generated concepts and categories in Visual Text Analytics (VTA) can be refined using LITI and Boolean rules, highly customized models can be developed in VTA.

As in all analytical projects, the discovery process in Text Analytics projects requires several iterations in which the insights found in one iteration are used in the next. With respect to the linguistic rules, one must determine whether the new rules are an improvement over the ones used in previous iterations by finding how many true positives and false positives are matched by the new rules. This process should be repeated until one obtains the required precision.

Many thanks to Teresa Jade and Biljana Belamaric Wilsey for reviewing the linguistic rules used in this blog. For more information about Visual Text Analytics, please check out the SAS website.

Analysis of Movie Reviews using Visual Text Analytics was published on SAS Users.

April 5, 2019

When working with files like SAS programs, images, documents, and logs, we are used to accessing them in operating system directories. In Viya, many of these files are not stored on the file system. In this blog, we will look at where and how files are stored in Viya, and how to manage them.

In Viya, external files are stored in the Infrastructure Data server and accessed using the file micro-service. The file service manages a wide variety of file types, including images, HTML files, text files, CSV files, SAS programs, media files, PDFs, Office documents, and logs. The file service is not a complete file system, but rather a method of accessing individual files stored in the Infrastructure Data server, each identified by its URI (a unique identifier).

Some files may also be accessible in Viya folders. The file service stores resources like images, SAS programs, etc. in the Infrastructure Data server, but they are accessible from Viya folders in the Content Area of Environment Manager or in SAS Drive.

The screenshot below shows the properties of an image file that was uploaded to a Viya folder using SAS Drive. You can see the filename, the URI (including the ID) and the location in the folder structure.

For files that are surfaced in folders, we can manage the files (view, copy, move, delete etc.) using SAS Environment Manager Content area, SAS Drive or the Folders command line interface. To learn more, watch this video from the SAS Technical Insights and Expertise Series.

However, many system-generated files are managed by the file service and stored in the infrastructure data server, but are not accessible from the folders interfaces. A good example of system-generated files that an administrator may need to manage is logs.

Many times when processing in Viya, logs are created and stored in the infrastructure data server. For example, when CAS table state management jobs, data plans, file imports, or model scoring processes are executed, a log is generated and stored by the file service.

How do we manage these files? While the generated files are mostly small in size, a large, active Viya system with a long history will need management of the logs and other files stored in the infrastructure data server.

One way we can access these files is using the REST API of the file service.

You can use your favorite tool to access the REST API. In this blog, I will use the GEL pyviyatools, starting with the basic callrestapi tool. For more details on the pyviyatools, read this blog or go directly to the SAS GitHub site.
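
If you prefer to call the service from your own script rather than through the pyviyatools, a minimal Python sketch with the requests package might look like the following. The host URL and OAuth token are placeholders you must supply; the "items" array follows the collection format shown in the output further below.

# A minimal sketch, not the pyviyatools: list files via the REST API
# The host and token below are placeholders, not real values.
import requests

base_url = "https://viya.example.com"              # hypothetical Viya host
headers = {"Authorization": "Bearer <access-token>",
           "Accept": "application/json"}

resp = requests.get(base_url + "/files/files", headers=headers)
resp.raise_for_status()

# each member of the collection carries the attributes shown below
for item in resp.json().get("items", []):
    print(item["id"], item["name"], item["size"])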

To get a list of all files stored in the infrastructure data server, send a GET request to the /files/files endpoint of the files service.

./callrestapi.py -m get -e /files/files -o simple

Partial output shows the image file referenced above and what could be a system generated log file.
=====Item 0 =======
contentType = image/jpeg
createdBy = geladm
creationTimeStamp = 2019-01-24T14:40:15.085Z
description = None
encoding = UTF-8
id = 6d2995b3-0fa6-4090-a338-1fbfe51fb26b
modifiedBy = geladm
modifiedTimeStamp = 2019-01-25T17:59:28.500Z
name = company_logo.JPG
properties = {}
size = 327848
version = 2

=====Item 164 =======
contentType = text/plain
createdBy = sasadm
creationTimeStamp = 2019-01-31T12:35:00.902Z
description = None
encoding = UTF-8
id = e4d4c83d-8677-433a-a8d3-b03bf00a5768
modifiedBy = sasadm
modifiedTimeStamp = 2019-01-31T12:35:09.342Z
name = 2019-01-31T12:35.823Z-results.log
parentUri = /jobExecution/jobs/380d3a0c-1c31-4ada-bf8d-4db66e786669
properties = {}
size = 2377
version = 2

To see the content of a file, send a GET request to the file’s content endpoint using the id of the file. The content of the log file is displayed by the call below.

./callrestapi.py -m get -e /files/files/01eb020f-468a-49df-a05a-34c6f834bfb6/content

This will display the file contents. The output shows the log from a CAS table state management job.

----------JOB INFORMATION----------

Job Created: 2019-01-31T12:35.823Z
Job ID: 380d3a0c-1c31-4ada-bf8d-4db66e786669
Heartbeat interval: PT0.3S
Job expires after: PT168H
Running as: sasadm
Log file: /files/files/e4d4c83d-8677-433a-a8d3-b03bf00a5768 [ 2019-01-31T12:35.823Z-results.log ]
Arguments:

Options:
{
  "enabled" : true,
  "type" : "LOAD",
  "settings" : {
    "refresh" : false,
    "refreshMode" : "newer",
    "refreshAccessThreshold" : 0,
    "varChars" : false,
    "getNames" : true,
    "allowTruncation" : true,
    "charMultiplier" : 2,
    "stripBlanks" : false,
    "guessRows" : 200,
    "scope" : "global",
    "encoding" : "utf-8",
    "delimiter" : ",",
    "successJobId" : ""
  },
  "selectors" : [ {
    "serverName" : "cas-shared-default",
    "inputCaslib" : "hrdl",
    "outputCaslib" : "hrdl",
    "filter" : "or(endsWith(tableReference.sourceTableName,'.sashdat'), endsWith(tableReference.sourceTableName,'.SASHDAT'),\nendsWith(tableReference.sourceTableName,'.sas7bdat'),\nendsWith(tableReference.sourceTableName,'.csv')\n)",
    "settings" : { }
  } ]
}

--------------------------------------------------

Created session cas-shared-default: d4d1235a-6219-4649-940c-e67526be31ed (CAS Manager:Thu Jan 31 07:35:01 2019)
Using session d4d1235a-6219-4649-940c-e67526be31ed
Server: cas-shared-default
Input caslib: hrdl
Output caslib: hrdl
Effective Settings:

----------LOAD STARTING----------

-- loaded --> unloaded - HR_SUMMARY
unloaded - HR_SUMMARY_NEW
unloaded - HRDATA
unloaded - PERFORMANCE_LOOKUP <-- performance_lookup.sas7bdat
Access denied.
----------LOAD COMPLETE----------

----------CLEANUP STARTING----------

Session deleted: cas-shared-default: d4d1235a-6219-4649-940c-e67526be31ed
----------CLEANUP COMPLETE----------

Final Job State: completed
Log file: /files/files/e4d4c83d-8677-433a-a8d3-b03bf00a5768
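
The same content call can be scripted. Here is a minimal Python sketch using the requests package; the host and token are placeholders, and the file id is the log file referenced above:

# A minimal sketch: fetch a single file's content via the REST API
import requests

base_url = "https://viya.example.com"              # hypothetical Viya host
headers = {"Authorization": "Bearer <access-token>"}
file_id = "e4d4c83d-8677-433a-a8d3-b03bf00a5768"   # the log file above

resp = requests.get(base_url + "/files/files/" + file_id + "/content",
                    headers=headers)
print(resp.text)                                   # prints the job log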

I started my journey into the file service to discover how an administrator could manage the log files. In reviewing the files and their contents using the REST API, I discovered that there was no easy way to uniquely identify a log file. The table below shows the attributes of some of the log files from the GEL Shared Viya environment.

As this post has illustrated, the file service REST API can be used to list and view the files. It can also be used for other file management activities. To help administrators manage files that are not visible via a user interface, a couple of new tools have been added to the pyviyatools.

listfiles.py provides an easy interface to query which files are currently stored in the infrastructure data server. You can sort files by size or modified date, and query based on date modified, the user who last modified the file, parentUri, or filename. The output provides the size of each file so that you can check the space being used to store files. Use this tool to view files managed by the file service and stored in the infrastructure data server. You can use listfiles.py -h to see the parameters.

For example, if I want to see all potential log files older than 6 days created by the /jobExecution service, I would use:

./listfiles.py -n log -p /jobExecution -d 6 -o csv

The output is a list of files in csv format:

id,name,contentType,documentType,createdBy,modifiedTimeStamp,size,parentUri

"f9b11468-4417-4944-8619-e9ea9cd3fab8","2019-01-25T13:35.904Z-results.log","text/plain","None","sasadm","2019-01-25T13:35:09.490Z","2459","/jobExecution/jobs/37536453-fe2f-41c2-ba63-ce72737e482c"

"dffdcc97-3fb0-47c1-a47d-e6c8f24f8077","2019-01-25T12:35.869Z-results.log","text/plain","None","sasadm","2019-01-25T12:35:08.739Z","2459","/jobExecution/jobs/99708443-2449-40b7-acc3-c313c5dbca23"

"5fa889b6-93fb-4496-98ba-e0c055ca5999","2019-01-25T11:35.675Z-results.log","text/plain","None","sasadm","2019-01-25T11:35:09.118Z","2459","/jobExecution/jobs/eb182f88-4853-41f4-be24-225176991e8a"

"87988659-3c2d-4602-b61a-8042b34022ac","2019-01-25T10:35.657Z-results.log","text/plain","None","sasadm","2019-01-25T10:35:09.881Z","2459","/jobExecution/jobs/73fffe47-a7ef-4c1d-b7bf-83ef0b86319e"
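
If you want to reproduce this kind of query in your own script, a rough Python sketch against the REST API follows. The host, token, and paging limit are assumptions for illustration; this is not the listfiles.py source.

# A rough sketch of the same query: log files older than 6 days
# created under /jobExecution. Host, token, and limit are placeholders.
import requests
from datetime import datetime, timedelta, timezone

base_url = "https://viya.example.com"
headers = {"Authorization": "Bearer <access-token>",
           "Accept": "application/json"}

cutoff = datetime.now(timezone.utc) - timedelta(days=6)
items = requests.get(base_url + "/files/files?limit=1000",
                     headers=headers).json().get("items", [])

for f in items:
    modified = datetime.fromisoformat(f["modifiedTimeStamp"].replace("Z", "+00:00"))
    if (modified < cutoff and "log" in f["name"]
            and f.get("parentUri", "").startswith("/jobExecution")):
        print(f["id"], f["name"], f["size"], f["parentUri"])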

archivefiles.py allows you to read files from the file service and save them to a directory on the file system. Optionally, the tool will also delete files from the file service to free up space. For example, if I want to archive all the files listed above, I would use:

./archivefiles.py -n log -d 6 -p /job -fp /tmp

This tool will create a timestamped directory under /tmp and save a copy of each file in that directory.

If you want to archive and delete, add the -x option.
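
Under the covers, the archive-and-delete pattern amounts to fetching each file's content, writing it to the archive directory, and then, only if you choose to delete, issuing a DELETE on the file's URI. A minimal Python sketch follows; the host, token, and file list are placeholders, and this is not the archivefiles.py source.

# A minimal sketch of the archive-and-delete pattern; placeholders throughout
import os
import time
import requests

base_url = "https://viya.example.com"
headers = {"Authorization": "Bearer <access-token>"}

# e.g. the items returned by the query sketched earlier
old_files = [{"id": "e4d4c83d-8677-433a-a8d3-b03bf00a5768",
              "name": "2019-01-31T12:35.823Z-results.log"}]

archive_dir = os.path.join("/tmp", time.strftime("%Y%m%dT%H%M%S"))
os.makedirs(archive_dir, exist_ok=True)

for f in old_files:
    resp = requests.get(base_url + "/files/files/" + f["id"] + "/content",
                        headers=headers)
    with open(os.path.join(archive_dir, f["name"]), "wb") as out:
        out.write(resp.content)
    # the equivalent of the -x option; delete only after a verified backup:
    # requests.delete(base_url + "/files/files/" + f["id"], headers=headers)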

IMPORTANT: Use the archive tool carefully. We recommend that you run a Viya Backup prior to running the tool to delete files.

Now you know where your files are, and you have some help with managing them. For the full details of how you can manage files using the file service REST API, view the file service REST API documentation on developer.sas.com. If you would like to suggest any changes to the existing tools, please enter a suggestion on GitHub.

Where are my Viya files? was published on SAS Users.