Data Management

January 4, 2018

What will 2018 unveil for the data management market? I searched expert opinions on technology trends for 2018 and matched them against my own to uncover the five major trends that I think we’ll see in data management this year: 1. Data movement becomes more important. Cloud providers have proven [...]

Data management predictions for 2018 was published on SAS Voices by Helmut Plinke

December 28, 2017

At a time of unprecedented financial challenges – from payment shifts and regulatory mandates to aging populations – our nation’s hospitals are awash in data.  Clearly, there is a need for analytics capabilities to make health care more efficient, cost-effective and satisfying to patients and providers alike. In the October [...]

Building effective tools for population health management was published on SAS Voices by Alan Cudney

December 21, 2017

With SAS Data Management, you can set up SAS Data Remediation to manage and correct data issues. SAS Data Remediation allows user- or role-based access to data exceptions.

When a data issue is discovered, it can be sent automatically or manually to a remediation queue where it can be corrected by designated users. The issue can be fixed from within SAS Data Remediation without the need to go to the affected source system. For more efficiency, the remediation process can also be linked to a purpose-designed workflow.

Setting up a remediation process that allows you to correct data issues from within SAS Data Remediation involves a few steps:

  • Set up Data Management jobs to retrieve and correct data in remediation.
  • Set up a Workflow to control the remediation process.
  • Register the remediation service.

Set up Data Management jobs to retrieve and correct data in remediation

To correct data issues from within Data Remediation, we need two real-time data management jobs: a retrieve job that reads the record in question and populates its data in the remediation UI, and a send job that writes the corrected data back to the data source, or to a staging area first.

Retrieve and send job

If the following remediation fields are available in the retrieve or send job’s External Data Provider node, data will be passed to the fields. The field values can be used to identify and work with the correct record:

REM_KEY (internal field to store the issue record ID)
REM_USERNAME (the current remediation user)
REM_ITEM_NAME
REM_ISSUE_NAME
REM_APPLICATION
REM_SUBJECT_AREA
REM_PACKAGE_NAME
REM_STATUS
REM_ASSIGNEE

Retrieve Action

The "retrieve” action occurs when the issue is opened in SAS Data Remediation. Data Remediation will only pass REM_ values to the data management job if the fields are present in the External Data Provider node. Although the REM_ values are the only way the data management job can communicate with SAS Data Remediation but they are not all required, meaning you can just call the fields in the External Data Provider node you need.

The job’s output fields will be displayed in the Remediation UI as edit fields to correct the issue record. It’s best to use a Field Layout node as the last job node to pass out the wanted fields with the desired labels.

Note: The retrieve job should only return one record.

A simple example of a retrieve job would take the issue record ID passed in via REM_KEY and use it to select the record from the source system.

Send Action

The “send” action occurs when pressing the “Commit Changes” button in the Data Remediation UI. All REM_ values in addition to the output fields of the retrieve job (the issue record edit fields) are passed to the send job. The job will receive values for those fields present in the External Data Provider node.

The send job can now work with the remediation record and save it to a staging area or submit it to the data source directly.

Note: Only one row will be sent as an input to the send job. Any data values returned by the send job will be ignored by Data Remediation.

Move jobs to the Data Management Server

When both jobs are written and tested, you need to move them to the Data Management Server, into a Real-Time Data Services sub-directory, so that Data Remediation can call them.

When Data Remediation calls the jobs, it will use the credentials of the person logged on to Data Remediation. Therefore, you need to make sure that the jobs on the Data Management Server have been granted the right permissions.

Set up a Workflow to control the remediation process

Although you don’t need to involve a workflow in SAS Data Remediation, using one can improve the efficiency of the remediation process.

You can design your own workflow using SAS Workflow Studio, or you can use a prepared workflow that comes with Data Remediation. You need to make sure that the desired workflow is loaded onto the Workflow Server in order to link it to the Data Remediation service.

Using SAS Workflow will help you to better control Data Remediation issues.

Register the remediation service

We can now register our remediation service in SAS Data Remediation. To do this, we go to the Data Remediation Administrator and select “Add New Client Application.”

Under Properties we supply an ID, which can be the name of the remediation service as long as it is unique, and a Display name, which is the name showing in the Remediation UI.

Next we set up the edit UI for the issue record. Under Issue User Interface we go to: Use default remediation UI… Using Data Management Server:

The Server address is the fully qualified address for Data Management Server including the port it is listening on. For example: http://dmserver.company.com:21036.

The Real-time service to retrieve item attributes and Real-time service to send item attributes need to point to the retrieve and send jobs, respectively, on the Data Management Server, including the job suffix .ddf as well as any directories under Real-Time Data Services where the jobs are stored.
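For example, if both jobs were deployed in a sub-directory named remediation under Real-Time Data Services (a hypothetical directory and hypothetical job names, for illustration), the two entries might look like this:

remediation/rem_customer_retrieve.ddf
remediation/rem_customer_send.ddf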

Under the Subject Area tab, we can register different subject categories for this remediation service. When calling the remediation service, we can categorize different remediation issues by setting different subject areas.

Under the Issue Types tab, we can register issue categories. This enables us to categorize the different remediation issues.

At Task Templates/Select Templates you can set the workflow to be used for a particular issue type.

After saving the remediation service, you will be able to use it. You can now assign data issues to the remediation service to efficiently correct the data and improve your data quality from within SAS Data Remediation.

Manage remediation issues using SAS Data Management was published on SAS Users.

December 13, 2017

SAS Data Preparation 2.1 is now available and it includes the ability to perform data quality transformations on your data using the definitions from the SAS Quality Knowledge Base (QKB).

The SAS Quality Knowledge Base is a collection of files which store data and logic that define data quality operations such as parsing, standardization, and generating match codes to facilitate fuzzy matching based on geographic locales. SAS software products reference the QKB when performing data quality transformations on your data.  These products include: SAS Data Integration Studio, SAS DataFlux Data Management Studio/Server, SAS code via dqprocs, SAS MDM, SAS Data Loader for Hadoop, SAS Event Stream Processing, and now SAS Data Preparation which is powered by SAS Viya.

Out-of-the-box QKB definitions include the ability to perform data quality operations on items such as Name, Address, Phone, and Email.

SAS Data Preparation 2.1

SAS Data Preparation – Data Quality Transformations

The following are the data quality transformations available in SAS Data Preparation:

  • Casing – case a text string in upper, lower, or proper case. Example using the Proper (Organization) case definition – input: sas institute   output: SAS Institute.
  • Parsing – break up a text string into its tokens. Example using the Name parse definition – input: James Michael Smith   output: James (Given Name token), Michael (Middle Name token), and Smith (Family Name token).
  • Field extraction – extract relevant tokens from a text string. Example using a custom created extraction definition for Clothing information – input: The items purchased were a small red dress and a blue shirt, large   output: dress; shirt (Item token), red; blue (Color token), and small; large (Size token).
  • Gender analysis – guess the gender of a text string. Example using the Name gender analysis definition – input: James Michael Smith   output: M (abbreviation for Male).
  • Identification analysis – guess the type of data for a text string. Example using the Contact Info identification analysis definition – for instance, it might identify the string jsmith@company.com as an email address rather than a name or phone number.
  • Match codes – generate a code to fuzzy match text strings. Example using the Name match definition at a sensitivity of 85: similar strings such as James Michael Smith and James M Smith would typically generate the same match code, so the two spellings can be matched.
    For more information on match codes, view this YouTube video on The Power of the SAS® Matchcode.
  • Standardize – put a text string into a common format. Example using the Phone standardization definition – input: 9196778000   output: (919) 677 8000.

Note: While all the examples above use definitions from the English (United States) locale in the SAS Quality Knowledge Base for Contact Information, QKBs are available for dozens of locales.
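Since the same QKB definitions are shared across SAS products, you can also exercise them from SAS code (the “dqprocs” route mentioned above). Here is a minimal sketch using SAS Data Quality Server functions; it assumes the English (United States) Contact Information QKB is installed and its ENUSA locale has been loaded:

/* Assumes the ENUSA locale was loaded first, e.g. via
   %dqload(dqlocale=(ENUSA), dqsetuploc=...) */
data _null_;
   length std $ 20 mc $ 60;
   /* Standardize a raw phone number with the Phone definition */
   std = dqStandardize('9196778000', 'Phone', 'ENUSA');
   /* Generate a fuzzy-match code for a name at sensitivity 85 */
   mc = dqMatch('James Michael Smith', 'Name', 85, 'ENUSA');
   put std= mc=;
run;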

You can also customize the definitions in the QKB using SAS DataFlux Data Management Studio. This allows you to update the out-of-the-box QKB definitions or create your own data types and definitions to suit your project needs. For example, you may need to create a definition to extract the clothing information from a free-form text field as shown in the Field extraction example. These customized definitions can then be used in SAS Data Preparation as part of your data quality transformations. For more information on customizing the QKB, you can view this YouTube video.

For more information on the SAS Quality Knowledge Base (QKB), you can view its documentation.

SAS Data Preparation 2.1: Data quality transformations was published on SAS Users.

December 7, 2017

The primary obstacle to becoming a data-driven business is that data is not readily available, leaving valuable insights unused in data silos. To overcome this hurdle, today’s companies are creating a new role: Chief Data Officers (CDO). Responsible for unlocking insights hidden in data silos, the CDO is tasked with [...]

Are you ready for the Chief Data Officer challenge? was published on SAS Voices by Helmut Plinke

December 5, 2017

With SAS Data Management, you can set up SAS Data Remediation to manage and correct data issues. SAS Data Remediation allows user- or role-based access to data exceptions.

Last time I talked about how to register and use a Data Remediation service. This time we will look at how to use SAS Workflow together with SAS Data Remediation. SAS Workflow comes as part of SAS Data Remediation, and you can use it to control the flow of a remediation issue to ensure that the right people are involved at the appropriate steps.

SAS Workflow is very powerful and offers a lot of features to process data. You can, for example, design workflows to:

  • Process tasks (workflow steps) in parallel.
  • Follow different routes through a workflow depending on certain conditions.
  • Set a timer to automatically route to a different user if an assigned user isn’t available for a certain time.
  • Call Web Services (REST/SOAP) to perform some work automatically.
  • Send email notifications if a certain workflow step is reached.
    …and more.

Invoke an external URL via a workflow

In this article, we are going to look at a workflow feature which is specific to SAS Data Remediation.

The latest version of Data Remediation introduces a feature that allows you to put a URL behind a workflow button and invoke it by clicking the button in the Remediation UI. Let’s take a closer look at how to set this up.

When you have registered a Data Remediation service, you can design a workflow using SAS Workflow Studio and link it to the remediation service. Workflow Studio comes with some sample workflows. It’s a good idea to take one of these sample workflows as a starting point and add additional steps to it as desired.

Workflow Task

In a workflow, you have tasks. A task is a single step of a workflow and is associated with data objects, statuses, policies and participants.

  • Data Objects are fields that store values.
  • A Status transitions the workflow from one task to the next.
  • Policies are rules that are executed on a certain task event; for example, calling a web service at the beginning of a task or sending an email at the end of a task.
  • Participants are users or groups who can execute a task; i.e., if you are not a user assigned to a task, you can’t open the issue in Data Remediation to work on it.

When you add a new task to a workflow you must connect it to another task using a status. In the Data Remediation UI, a status will show up as a Workflow Button to transition from one task to another.

Assigning a URL to Status

You can also use a status on a task to call a URL instead of transitioning to the next task. To do this, you add a status to a task but don’t use it to connect to another task.

At task Open, four statuses are assigned, but only Associate, Cancel and Reject connect to other tasks. Status Review is not connected, so it can be used to call a URL.

Right-click on status Review and select Edit to open a dialogue box with an Attributes… button. Here, you need to add a Key attribute named URL. The value of URL points to the fully qualified URL to be called:

http://MyServer/showAddtionalInformation/?recordid={Field1}

The URL can take parameters, in curly brackets (i.e. {Field1}), pointing to the task’s data objects. When the URL is called, the parameters are substituted with the appropriate data object values; if Field1 holds the record ID 4711, for example, the URL above resolves to http://MyServer/showAddtionalInformation/?recordid=4711. This way the URL call can be dynamic.

Dynamic URL calls via parameters

When you create a Data Remediation issue you can pass up to three values from the issue record to Remediation. For example, the issue record key. When the workflow is called by Data Remediation these values are copied from Data Remediation to the workflow’s top-level fields: Field1, Field2, Field3.

To use these values in the workflow, to pass them to a URL, you need to create an appropriate data object (Field1) at the task where you want to call a URL. You also need to add a policy to the task to copy the data object value from the top-level data object to the task data object.

This will make the value available at the task and you can use it as a parameter in the URL call as shown above.

Link workflow to Data Remediation Service

When you have finished the workflow, you can save it and upload it to the workflow server.

Once it is uploaded, you can link the workflow to the remediation service. In Data Remediation Administration, open the remediation service and go to Issue Types. Workflows are linked to issue types. They are not compulsory, but you can link one or more workflows to an issue type, depending on your business requirements.

At Issue Types, under Templates, select your workflow and link it to an issue type. You can make workflows mandatory for certain issue types by ticking the check box: “Always require a task template…” In this case Data Remediation expects one of the nominated workflows to be used when creating a remediation issue for the appropriate issue type.

Creating a remediation issue

You can now create a remediation issue for the appropriate issue type and assign a workflow to it by submitting the workflow name via the field "workflowName" in the JSON structure when creating the remediation issue, as in the fragment below. See “Improving data quality through SAS Data Remediation” for more about creating Data Remediation issues.
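For example, the relevant fragment of the JSON structure (shown in full in that article) could nominate a workflow like this, where "Customer Tag" is just an example workflow name:

{
  "issues": [{
    "name": "Phone Number",
    "workflowName": "Customer Tag"
  }]
}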

Call a URL via workflow button

When the remediation issue is created, you can open it in SAS Data Remediation, where you will see the Review button. When you click the Review button, the URL that you assigned to the URL key attribute for status Review is called.

By using workflows with SAS Data Remediation, you can better control the process of addressing data issues. Being able to put a URL behind a workflow button and invoke it enhances your capabilities for resolving issues and improving data quality.

Process and control Data Remediation issues using SAS Workflow was published on SAS Users.

November 30, 2017

There are so many reasons why SAS programmers love SAS -- as a matter of fact, I wrote a blog on it back in 2012. I now realize that I could've written a whole series, not just a single post. And with the recent publishing of my first book, Big Data [...]

Why SAS programmers love SAS, revisited was published on SAS Voices by David Pope

November 20, 2017

Many SAS users have inquired about SAS Cloud Analytic Services’ (CAS) Distributed Network File System, DNFS. (Learn more about CAS.)

The “NFS” in “DNFS”

Let’s start at the beginning. The “NFS” in DNFS stands for “Network File System” and refers to the ability to share files across a network. As the picture below illustrates, a network file system lets numerous remote hosts access another host’s files.

Understanding DNFS

NFS

There are numerous network file system protocols that can be used for file sharing – e.g. CIFS, Ceph, Lustre – but the most common on Linux is NFS. While NFS is the dominant file-sharing protocol, the “NFS” part of “DNFS” does not correspond to the NFS protocol specifically. Currently, all the file systems DNFS supports are based on NFS, but DNFS may support file systems based on other protocols in the future. So, it’s best to think of the “NFS” part of “DNFS” as a generic “network file system” (clustered file system) and not the specific NFS protocol.

The “D” in “DNFS”

The “D” in DNFS stands for “Distributed,” but it does not refer to the network file system; by definition, that is already distributed, since the file system is external to the machines accessing it. The “Distributed” in DNFS refers to CAS’ ability to use a network file system in a massively parallel way. With a supported file system mounted identically on each CAS node, CAS can access (both read and write) the file system’s CSV and SASHDAT files from every worker node in parallel.
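For example, a conventional NFS mount that would have to be defined identically on every CAS node might look like this in /etc/fstab (host name and paths here are hypothetical):

nfsserver.company.com:/export/casdata   /mnt/dnfs   nfs   rw,hard,intr   0 0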

This parallel file access is not an attribute of the file system, it is a capability of CAS. By definition, network file systems facilitate access at the file level, not the block level. With DNFS, CAS actively manages parallel block level I/O to the network file system, making sure file seek locations and block I/O operations do not overlap, etc.

DNFS

DNFS as CAS Backing Store

Not only can CAS perform multi-machine parallel I/O from network file systems, it can also memory-map NFS SASHDAT files directly into CAS. Thus, SASHDAT files on DNFS act as both the CASlib data source as well as the virtual memory “backing store,” often negating the need for CAS to utilize CAS_DISK_CACHE.

Note 1: Data transformations on load, such as row filtering and field selection, as well as encryption can trigger CAS_DISK_CACHE usage. Since the data must be transformed (subset and/or decrypted), CAS copies the transformed data into CAS_DISK_CACHE to support CAS processing.

Note 2: It is possible to use DNFS atop an encrypting file system or hardware storage device. Here, the HDAT blocks are stored encrypted but transmitted to the CAS I/O layer decrypted. Assuming no other transformations, CAS_DISK_CACHE will not be used in this scenario.
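To make the DNFS usage concrete, here is a minimal sketch of defining a DNFS caslib and loading a SASHDAT file through it (the caslib name, mount point and file name are hypothetical):

cas mysess;   /* start a CAS session */
proc cas;
   /* define a DNFS caslib over the file system mounted identically on every node */
   table.addCaslib /
      name="dnfs"
      path="/mnt/dnfs/data"
      dataSource={srcType="dnfs"};
   /* load a SASHDAT file; with DNFS the blocks can be read in parallel and memory-mapped */
   table.loadTable /
      caslib="dnfs"
      path="big_table.sashdat"
      casOut={name="big_table"};
quit;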

DNFS Memory Mapping

Performance Considerations

DNFS-based CAS loading will only be as fast as the slowest component involved. The chosen NFS architecture (hardware and CAS connectivity) should support I/O throughput commensurate with the CAS installation and in line with the implementation’s service level agreements. The diagram below shows a NetApp ONTAP clustering architecture as an example; a different file system technology might look a little different, but the same basic ideas will apply.

DNFS w/ Multi Machine File System

As described earlier, CAS manages the parallel I/O operations. Requests from CAS are sent to the appliance and handled by the NFS metadata server. The storage device implementing the NFS protocol points CAS DNFS to the proper file and block locations on the NFS data servers, which pass the data to the CAS worker nodes directly.

Understanding DNFS was published on SAS Users.

October 12, 2017

With SAS Data Management, you can set up SAS Data Remediation to manage and correct data issues. SAS Data Remediation allows user- or role-based access to data exceptions.

When a data issue is discovered, it can be sent automatically or manually to a remediation queue where it can be corrected by designated users.

Let’s look at how to set up a remediation service and how to send issue records to Data Remediation.

Register the remediation service

To register a remediation service in SAS Data Remediation, we go to the Data Remediation Administrator and select “Add New Client Application.”

Under Properties we supply an ID, which can be the name of the remediation service as long as it is unique, and a Display name, which is the name showing in the Remediation UI.

Under the Subject Area tab, we can register different subject categories for this remediation service. When calling the remediation service, we can categorize different remediation issues by setting different subject areas. We can, for example, use the subject area to point to different data quality dimensions like Completeness, Uniqueness, Validity, Accuracy and Consistency.

Under the Issue Types tab, we can register issue categories. This enables us to categorize the different remediation issues. For example, we can point to the affected part of the record, like Name, Address or Phone Number.

At Task Templates/Select Templates, we can set a workflow to be used for each issue type. You can design your own workflow using SAS Workflow Studio, or you can use a prepared workflow that comes with Data Remediation. You need to make sure that the desired workflow is loaded onto the Workflow Server in order to link it to the Data Remediation service. Workflows are not mandatory in SAS Data Remediation, but they will improve the efficiency of the remediation process.

Saving the remediation service will make it available to be called.

Sending issues to Data Remediation

When you process data and have identified issues that you want to send to Data Remediation, you can either call Data Remediation immediately from the job where you process the data, or store the issue records in a table first and then, in a second step, create remediation records via a Data Management job.

To send records to Data Remediation, you can call the remediation REST API from the HTTP Request node in a Data Management job.

Remediation REST API

The REST API expects a JSON structure supplying all required information:

{
	"application": "mandatory",
	"subjectArea": "mandatory",
	"name": "mandatory",
	"description": "",
	"userDefinedFieldLabels": {
		"1": "",
		"2": "",
		"3": ""
	},
	"topics": [{
		"url": "",
		"name": "",
		"userDefinedFields": {
			"1": "",
			"2": "",
			"3": ""
		},
		"key": "",
		"issues": [{
			"name": "mandatory",
			"importance": "",
			"note": "",
			"assignee": {
				"name": ""
			},
			"workflowName": "",
			"dueDate": "",
			"status": ""
		}]
	}]
}

JSON structure description:

In a Data Management job, you can create the JSON structure in an Expression node and use field substitution to pass in the necessary values from the issue records. The expression code could look like this:

REM_APPLICATION= "Customer Record"
REM_SUBJECT_AREA= "Completeness"
REM_PACKAGE_NAME= "Data Correction"
REM_PACKAGE_DESCRIPTION= "Mon-Result: " &formatdate(today(),"DD MM YY") 
REM_URL= "http://myserver/Sourcesys/#ID=" &record_id
REM_ITEM_NAME= "Mobile phone number missing"
REM_FIELDLABEL_1= "Source System"
REM_FIELD_1= "CRM"
REM_FIELDLABEL_2= "Redord ID"
REM_FIELD_2= record_id
REM_FIELDLABEL_3= "-"
REM_FIELD_3= ""
REM_KEY= record_id
REM_ISSUE_NAME= "Phone Number"
REM_IMPORTANCE= "high"
REM_ISSUE_NOTE= "Violated data quality rule phone: 4711"
REM_ASSIGNEE= "Ben"
REM_WORKFLOW= "Customer Tag"
REM_DUE_DATE= "2018-11-01"
REM_STATUS= "open"
 
JSON_REQUEST= '
{
  "application":"' &REM_APPLICATION &'",
  "subjectArea":"' &REM_SUBJECT_AREA &'",
  "name":"' &REM_PACKAGE_NAME &'",
  "description":"' &REM_PACKAGE_DESCRIPTION &'",
  "userDefinedFieldLabels": {
    "1":"' &REM_FIELDLABEL_1 &'",
    "2":"' &REM_FIELDLABEL_2 &'",
    "3":"' &REM_FIELDLABEL_3 &'"
  },
  "topics": [{
    "url":"' &REM_URL &'",
    "name":"' &REM_ITEM_NAME &'",
    "userDefinedFields": {
      "1":"' &REM_FIELD_1 &'",
      "2":"' &REM_FIELD_2 &'",
      "3":"' &REM_FIELD_3 &'"
    },
    "key":"' &REM_KEY &'",
    "issues": [{
      "name":"' &REM_ISSUE_NAME &'",
      "importance":"' &REM_IMPORTANCE &'",
      "note":"' &REM_ISSUE_NOTE &'",
      "assignee": {
        "name":"' &REM_ASSIGNEE &'"
      },
      "workflowName":"' &REM_WORKFLOW &'",
      "dueDate":"' &REM_DUE_DATE &'",
      "status":"' &REM_STATUS &'"
    }]
  }]
}'


Tip: You could also write a global function to generate the JSON structure.

After creating the JSON structure, you can invoke the web service to create remediation records. In the HTTP Request node, you call the web service as follows:

Address: http://[server]:[port]/SASDataRemediation/rest/groups
Method: post
Input Field: the variable containing the JSON structure, i.e. JSON_REQUEST.
Output Field: a field to take the output from the web service. You can use the New button to create a field and set its size to 1000.
Under Security… you can set a defined user and password to access Data Remediation.
In the HTTP Request node’s advanced settings, set the WSCP_HTTP_CONTENT_TYPE option to application/json.
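If you want to test the remediation REST service outside of a Data Management job, a rough SAS equivalent of the HTTP Request node call might look like this (a sketch only: the server, port, credentials and abbreviated JSON are placeholders):

filename req temp;
data _null_;
   file req;
   /* minimal JSON carrying only the mandatory fields */
   put '{"application":"Customer Record","subjectArea":"Completeness","name":"Data Correction",';
   put ' "topics":[{"key":"4711","issues":[{"name":"Phone Number"}]}]}';
run;

proc http
   url="http://remserver.company.com:8080/SASDataRemediation/rest/groups"
   method="post"
   in=req
   ct="application/json"
   webusername="remuser"
   webpassword="secret";
run;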

You can now execute the Data Management job to create the remediation records in SAS Data Remediation.

Improving data quality through SAS Data Remediation was published on SAS Users.

August 18, 2017

SAS Viya deployments use credentials for accessing databases and other third-party products that require authentication. In this blog post, I will look at how this sharing of credentials is implemented in SAS Environment Manager.

In SAS Viya, domains are used to store the:

  • Credentials required to access external data sources.
  • Identities that are allowed to use those credentials.

There are three types of domains:

  • Authentication stores credentials that are used to access an external source, and can then be associated with a caslib.
  • Connection is used when the external database has been set up to require a User ID but no password.
  • Encryption stores an encryption key required to read data at rest in a path assigned to a caslib.

In this blog post we will focus on authentication domains which are typically used to provide access to data in a database management system. It is a pretty simple concept; an authentication domain makes a set of credentials available to a set of users. This allows SAS Viya to seamlessly access a resource. The diagram below shows a logical view of a domain. In this example, the domain PGAuth stores the credentials for a Postgres database, and makes those credentials available to two groups (and their members) and three users.

How does this work when a user accesses data in a database caslib? The following steps are performed:

1.     Log on to SAS Viya using personal credentials: the user’s identity is established including group memberships.

2.     Access a CASLIB for a database: using the user’s identity and the authentication domain of the CASLIB, Viya will look up the credentials associated with that identity in the domain.

3.     Two results are possible. A credential match is either:

  • Found: the credentials are passed to the database authentication provider to determine access to the data.
  • Not found: no access to the data is provided.

To manage domains in SAS Environment Manager you must be an administrator. In SAS Environment Manager select Security > Domains. There are two views available:  Domains and Credentials. The Domains view lists all defined domains. You can access the credentials for a domain by right-clicking on the domain and selecting Credentials.

The Credentials view lists all credentials defined and the domains for which they are associated.

Whichever way you get to a credential, you can edit it by right-clicking and selecting Edit. In the edit dialog, you can specify the Identities (users and groups) that can use the credential, and the User ID and Password of the credential. Note that only users who are already listed in the Identities field will be able to edit this field, so make sure you are included in it (directly or through group membership) prior to saving.

To use an authentication domain, you reference it in the CASLIB definition. When defining a non-path based CASLIB you must select a domain to provide user credentials to connect to the database server. This can be done when creating a new CASLIB in SAS Environment Manager in the Data > Libraries area.

If you use code to create or access your caslib, use the authenticationDomain option. For example, you can specify authenticationDomain in the table.addCaslib action, as sketched below.
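A minimal sketch of that approach, reusing the PGAuth domain from the earlier example (the caslib, server and database names are made up):

cas mysess;   /* start a CAS session */
proc cas;
   /* attach a Postgres caslib; user credentials are resolved via the PGAuth domain */
   table.addCaslib /
      name="pgdata"
      dataSource={srcType="postgres",
                  authenticationDomain="PGAuth",
                  server="dbserver.company.com",
                  database="salesdb"};
quit;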

If a user is not attached to the authentication domain directly, or through a group membership, they will not be able to access the credentials. An error will occur when they attempt to access the data.

This has been a brief look at storing and using credentials to access databases from SAS Viya. You can find more detail in the SAS Viya Administration Guide.

SAS Viya sharing credentials for database access was published on SAS Users.