Scott McCauley

February 12, 2019
 

Multi-tenancy is one of the exciting new capabilities of SAS Viya. Because it is so new, there is quite a lot of misinformation going around about it. I would like to offer you five key things to know about multi-tenancy before implementing a project using this new paradigm.

All tenants share one SAS Viya deployment

Just as apartment units exist within a larger, common building, all tenants, including the provider, exist within one, single SAS Viya deployment. Tenants share some SAS Viya resources such as the physical machines, most microservices, and possibly the SAS Infrastructure Data Server. Other SAS Viya resources are duplicated per tenant such as the CAS server and compute launcher. Regardless, the key point here is that because there is one SAS Viya deployment, there is one, and only one, SAS license that applies to all tenants. Adding a new tenant to a multi-tenant deployment could have licensing ramifications depending upon how the CAS server resources are allocated.

Decision to use multi-tenancy must be made at deployment time

Many people, myself included, are not very comfortable with commitment. Making a decision that cannot be changed is something we avoid. Deciding whether your SAS Viya deployment supports multi-tenancy cannot be put off for later.

This decision must be made at the time the software is deployed. There is currently no way to convert a multi-tenant deployment to a single-tenant deployment or vice versa short of redeployment, so choose wisely. As with marriage, the decision to go single-tenant or multi-tenant should not be taken lightly and there are benefits to each configuration that should be considered.

Each tenant is accessed by separate login

Let’s return to our apartment analogy. Just as each apartment owner has a separate key that opens only the apartment unit they lease, SAS Viya requires users to log on (authenticate) to a specific tenant space before allowing them access.

SAS Viya facilitates this by accessing each tenant by way of a separate sub-domain address. For example, a user wishing to use the Acme tenant must access the deployment with a URL of acme.viya.sas.com, while a GELCorp user would use a URL of gelcorp.viya.sas.com.

This helps create total separation of tenant access and allows administrators to define and restrict user access for each tenant. It does, however, mean that each tenant space is authenticated individually and there is no notion of single sign-on between tenants.
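
To make that concrete, here is a minimal sketch of the separation as seen from the command line. The host names come from the example above, and the /SASLogon path is the standard SAS Viya logon endpoint; treat the exact URLs as illustrative for your own deployment.

# Each tenant is reached through its own sub-domain, so a session established
# against one tenant does not carry over to another tenant.
$ curl -sk https://acme.viya.sas.com/SASLogon/login       # Acme users log on here
$ curl -sk https://gelcorp.viya.sas.com/SASLogon/login    # GELCorp users must log on separately here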

No content is visible between tenants

Extending the apartment analogy, imagine brick walls between each of the tenants: tenants are completely separated from one another. One tenant cannot see any other tenant's content, data, users, or groups, or even that other tenants exist in the system.

One common scenario for multi-tenancy is to keep business units within a single corporation separated. For example, we could set up Sales as a tenant, Finance as a tenant, and Human Resources as a tenant. This works very well if we want to truly segregate the departments' work. But what happens when Sales wants to share a report with Finance or Finance wants to publish a report for the entire company to view?

There are two options for this situation:
• We could export content from one tenant and import it into the other tenant(s). For example, we would export a report from the Sales tenant and import it into the Finance tenant, assuming that the data the report needs is available to both. But now we have the report (and data) in two places, and if Sales updates the report we must repeat the export/import process.
• We could set up a separate tenant at the company level for shared content. Because identities are not shared between tenants, this would require users to log off the departmental tenant and log on to the corporate tenant to see shared reports.

There are pros and cons to using multi-tenancy for departmental separation and the user experience must be considered.

Higher administrative burden

Managing and maintaining a multi-tenancy deployment is more complex than taking care of a single-tenant deployment. Multi-tenancy requires additional CAS servers, additional micro-services, possibly additional machines, and multiple administrative personas. The additional resources can complicate backup strategies, authorization models, operating system security, and resource management of shared resources.

There are also more levels of administration, requiring an administrator persona for the provider of the environment and a separate administrator persona for each tenant. Each of these administration personas has a different scope over which aspects of the deployment it can see and interact with. For example, the provider administrator can see all system resources, all system activity, logs, and tenants, but cannot see any tenant content.

Tenant administrators can only see and interact with dedicated tenant resources such as their CAS server and can also manage all tenant content. They cannot, however, see system resources, other tenants, or logs.

Therefore, coordinating management of a complete multi-tenant deployment will require multiple administration personas, careful design of operating system group membership to protect and maintain sufficient access to files and processes, and possibly multiple logins to accomplish administrative tasks.

Now what?

I have pointed out a handful of key concepts that differ between the usual single-tenant deployments and what you can expect with a multi-tenant deployment of SAS Viya. I am obviously just scratching the surface on these topics. Here are a couple of other resources to check out if you want to dig in further.

Documentation: Multi-tenancy: Concepts
Article: Get ready! SAS Viya 3.4 highlights for the Technical Architect

5 things to know about multi-tenancy was published on SAS Users.

June 6, 2018
 

Logs. They can be an administrator’s best friend or a thorn in their side. Who among us hasn’t seen a system choked to death by logs overtaking a filesystem? Thankfully, the chances of that happening with your SAS Viya 3.3 deployment are greatly reduced due to the automatic management of log files by the SAS Operations Infrastructure, which archives log files every day.

With a default installation of SAS Viya 3.3, log files older than three days are automatically zipped up, the original files are deleted, and the resulting archives are stored in /opt/sas/viya/config/var/log. This process is managed by the sas-ops-agent process on each machine in your deployment. According to SAS R&D, this results in up to a 95% reduction in overall log file storage requirements.

The archiving task is managed by the sas-ops-agent on each machine. Running ./sas-ops tasks shows that the LogfileArchive task runs daily at 04:00:

 {
 "version": 1,
 "taskName": "LogfileArchive",
 "description": "Archive daily",
 "hostType": "linux",
 "runType": "time",
 "frequency": "0000-01-01T04:00:00-05:00",
 "maxRunTime": "2h0m0s",
 "timeOutAction": "restart",
 "errorAction": "cancel",
 "command": "sas-archive",
 "commandType": "sas",
 "publisherType": "none"
 },

While the logs are zipped to reduce their size, the zip files are stored locally, so additional maintenance may be required to move them to another location. For example, the listing from my test system below shows that the zip files are retained once created.

[sas@intviya01 log]$ ll
total 928
drwxr-xr-x. 3 sas sas 20 Jan 24 14:19 alert-track
drwxrwxr-x. 3 sas sas 20 Jan 24 16:13 all-services
drwxr-xr-x. 3 sas sas 20 Jan 24 14:23 appregistry
...
drwxr-xr-x. 4 sas sas 31 Jan 24 14:20 evmsvrops
drwxr-xr-x. 3 sas sas 20 Jan 24 14:27 home
drwxr-xr-x. 3 root sas 20 Jan 24 14:18 httpproxy
drwxr-xr-x. 3 sas sas 20 Jan 24 14:35 importvaspk
-rw-r--r--. 1 sas sas 22 Jan 26 04:00 log-20180123090001Z.zip
-rw-r--r--. 1 sas sas 22 Jan 27 04:00 log-20180124090000Z.zip
-rw-r--r--. 1 sas sas 22 Jan 28 04:00 log-20180125090000Z.zip
-rw-r--r--. 1 sas sas 10036 Jan 29 04:00 log-20180126090000Z.zip
-rw-r--r--. 1 sas sas 366303 Mar 6 04:00 log-20180303090000Z.zip
-rw-r--r--. 1 sas sas 432464 Apr 3 04:00 log-20180331080000Z.zip
-rw-r--r--. 1 sas sas 22 Apr 4 04:00 log-20180401080000Z.zip
-rw-r--r--. 1 sas sas 22 Apr 5 04:00 log-20180402080000Z.zip
-rw-r--r--. 1 sas sas 15333 Apr 6 04:00 log-20180403080000Z.zip
-rw-r--r--. 1 sas sas 21173 Apr 7 04:00 log-20180404080000Z.zip
-rw-r--r--. 1 sas sas 22191 Apr 8 04:00 log-20180405080000Z.zip
-rw-r--r--. 1 sas sas 21185 Apr 9 04:00 log-20180406080000Z.zip
-rw-r--r--. 1 sas sas 21405 Apr 10 04:00 log-20180407080000Z.zip
drwxr-xr-x. 3 sas sas 20 Jan 24 14:33 monitoring
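
One way to handle that additional maintenance is a small housekeeping job. Here is a minimal sketch, assuming a site-specific archive location (/archive/sas-viya-logs is purely illustrative); it moves zip archives older than 30 days out of the local log directory and could be scheduled via cron.

# Housekeeping sketch -- adjust the destination path and age threshold for your site.
$ find /opt/sas/viya/config/var/log -maxdepth 1 -name 'log-*.zip' -mtime +30 \
      -exec mv {} /archive/sas-viya-logs/ \;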

If three days is too short of a time to retain logs, you can adjust the default timeframe for archiving logs by modifying the default task list for the sas-ops-agent on each machine.

Edit the tasks.json file to suit your needs and then issue a command to modify the task template for sas-ops-agent processes. For example, this will modify the log archive process to retain seven days of information:

...  
{
 "version": 1,
 "taskName": "LogfileArchive",
 "description": "Archive daily",
 "hostType": "linux",
 "runType": "time",
 "frequency": "0000-01-01T04:00:00-05:00",
 "maxRunTime": "2h0m0s",
 "timeOutAction": "restart",
 "errorAction": "cancel",
 "command": "sas-archive -age 7",
 "commandType": "sas",
 "publisherType": "none"
 },
...
 
$ ./sas-ops-agent import -tasks tasks.json

Restart the sas-ops-agent processes on each of your machines and you will be good to go.
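
If you manage the services with the operating system's service scripts, the restart might look something like the sketch below. The service name shown is an assumption (it varies by deployment and release), so list the registered SAS Viya services first and substitute the ops-agent entry you actually find.

# List the registered SAS Viya init scripts and locate the ops-agent entry;
# the service name below is an assumed example, not a guaranteed name.
$ ls /etc/init.d/ | grep -i ops
$ sudo service sas-viya-ops-agentsrv-default restart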

I hope you found this post helpful.

Additional Resources for Administrators

SAS Administrators Home Page
How-to Videos for Administrators
SAS Administration Community
SAS Administrators Blogs
SAS Administrator Training
SAS Administrator Certification

Log Management in SAS Viya 3.3 was published on SAS Users.

December 21, 2016
 

The report-ready SAS Environment Manager Data Mart has been an invaluable addition to SAS 9.4 for SAS administrators. The data mart tables are created and maintained by the SAS Environment Manager Service Architecture Framework and provide a source of data for out-of-the-box reports as well as custom reports that any SAS administrator can easily create. As you can imagine, the size of the tables in the data mart can grow quite large over time so balancing the desired time span of reporting and the size of the tables on disk requires some thought. The good news: SAS 9.4 M4 has made that job even easier.

The Environment Manager Data Mart (EVDM) has always provided a configuration setting to determine how many days of resource records to keep in the data mart tables. In a fresh SAS 9.4 M4 installation, the default setting for “Number of Days of Resource Records in Data Mart” is 60 days. This means that EVDM data records older than 60 days are deleted from the tables whenever the data mart ETL process executes.


The space required to house the Environment Manager Data Mart is split across three primary areas.

  • The ACM library tables contain system level information
  • The APM library tables contain audit and performance data culled from SAS logs
  • The KITS library contains miscellaneous tables created by data mart kits that collect specialty information about HTTP access, SAS data set access, and so on.

Prior to SAS 9.4M4, the ACM and APM libraries duly purged data according to the “Number of Days of Resource Records in Data Mart” setting, but the KITS library did not. For most of the KITS tables this is not a big deal, but for some deployments the HTTPACCESS table in the KITS library can grow quite large. For administrators who have enabled the VA feed for the Service Architecture Framework, the size of the HTTPACCESS table directly impacts the time it takes to autoload the results of each refresh of the data mart, as well as the amount of memory consumed by the LASR server used for the Environment Manager Data Mart LASR library.

So what is the big change for SAS 9.4 M4?

The KITS library now respects the “Number of Days of Resource Records in Data Mart” setting and removes data older than the threshold. If you are a SAS administrator, you can now forget about having to separately manage the KITS library, which should simplify space management.

SAS administrators may need to adjust the “Number of Days of Resource Records in Data Mart” setting to strike a balance between the date range requirements for reporting and the amount of disk space they have available for storing the EVDM tables.  With SAS 9.4 M4, however, administrators can rest assured that all EVDM tables will self-manage according to their wishes.

More on the Service Architecture Framework.

tags: SAS 9.4, SAS Administrators, SAS Environment Manager, SAS Professional Services

Easier Space Management for EV Data Mart Tables in 9.4M4 was published on SAS Users.

August 11, 2016
 

As a SAS administrator, you may face situations where you need to locate content that has been modified in the system since a given point in time. If the system has many content developers, discovering all of the folders, libraries, tables, reports, explorations, and similar items that have been modified can be a daunting task. Fortunately, the ExportPackage command from the SAS Platform Object Framework provides just what we need for tackling this sort of question.

Having a command-line utility that combs metadata for objects modified or created based on a date criterion can be helpful in many situations. For example:

  • The administrator oversees a system where full backups are not scheduled to run every day. However, to minimize lost work in the event of a system failure, the administrator would like to create a package of all content that was modified each day so that it could be restored if necessary.
  • A content developer contacts the administrator saying that she would like to create a package of everything she has modified since yesterday but cannot recall everything that she has done.
  • It is Thursday and a problem has arisen that requires reverting the system to its state as of Sunday night which was the last backup. Before reverting the system from the backup, the administrator would like to create a package of any content modified since Monday morning so that it can be restored after the reversion.

The ExportPackage utility can help in all of these situations. To see how, I am going to run the command on my system to locate all of the content I have created today in the /Enterprise folder tree, which is where I store content for testing.

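Reconstructed from the arguments described below, the command I ran looks roughly like this. Treat it as a hedged sketch: run ExportPackage from the SAS Platform Object Framework tools directory for your release, and adjust the connection profile, package path, and folder to fit your environment.

ExportPackage -profile "SASAdmin" -since "Today" -includeDep -package "C:\Temp\newObjects.spk" -objects "/Enterprise" -noexecute

Dropping -noexecute would actually build the package rather than just listing what it would contain.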

As you can see, the ExportPackage command takes several arguments that allow me to control the scope of metadata that is searched for content and to filter that content on a date specification. The key arguments in my command are:

  • -profile “SASAdmin” – I am using the SASAdmin metadata connection profile to identify my metadata server and to provide a user and password rather than specifying each value individually on the command line.
  • -since “Today” – This is the magic. I just want today’s changes so the keyword “Today” is perfect. It also makes scripting this command much easier as I do not have to know the current date. Of course, if I needed to specify an exact date I can certainly do that here as well, or I can use any of the other keywords such as “Yesterday,” “Week to date,” “Month to date” and “Year to date.”  There is also a -before argument if we ever need to discover objects created or modified before a certain date.
  • -includeDep – This adds dependent objects into the package. For example, if I created a new report which is dependent on a specific table, both objects will be added.
  • -package “C:\Temp\newObjects.spk” – Names the package that will be created with my modified objects. This argument is required.
  • -objects “/Enterprise” – Specifies the folder where I want to start the search. This argument can also filter for specific types of objects if needed.
  • -noexecute – Lists the objects to export but does not create the package. I used this so we could see what the package would contain.


April 4, 2016
 

In the past, configuring a new autoload library for SAS Visual Analytics was a manual process involving steps to create very specific folder structures and to copy and edit scripts and SAS programs. No more! Updates to the SAS Deployment Manager have added a new task that creates and configures new autoload libraries automatically, which should help SAS administrators add autoload libraries to their system while reducing the possibility of making a careless mistake.

For those unfamiliar with autoload, I am talking about a directory into which a user can drop a spreadsheet, delimited file, or SAS Data Set and have that file automatically loaded for use in a SAS Visual Analytics environment.

So let’s see how this works. For a distributed environment, we need to make sure we start the SAS Deployment Manager on the compute tier machine.  Looking through the tasks, you should see “Configure Autoload Directory for SAS Visual Analytics.”


After connecting to the SAS Metadata Server with administrative credentials, you will be prompted with a SAS LASR Artifact selection. The first two selections enable you to define entirely new autoload locations serviced by brand-new LASR servers while the last selection enables you to add an autoload library to an existing LASR server. I am going to work under the assumption that I am creating a brand new Non-Distributed LASR server that will surface my brand new autoload library so I will select the first option.


In subsequent dialogs I can provide the parameters that define a new LASR server, a new LASR library, and an autoload location where users will deposit data to be loaded.


The SAS Deployment Manager will enter configuration processing, after which we should see confirmation that our autoload configuration was successful.


The summary document generated by the SAS Deployment Manager indicates that we have created a new SAS LASR server and SAS LASR Library, a new drop-zone with the requisite folder structure, and a new set of scripts with which to manage the autoload library.


All that is left for the SAS administrator to do is to execute schedule.bat for the new autoload library to initiate the job that will check the autoload drop-zone for new data.  By default that will occur every 15 minutes, but administrators can edit schedule.bat before executing it to adjust that setting.

tags: SAS Administrators, SAS Deployment Manager, SAS Professional Services, SAS Visual Analytics

SAS Visual Analytics autoload configuration made easy was published on SAS Users.

September 4, 2015
 

As Gerry Nelson pointed out in an earlier post on 9.4M3, a new interface to the SAS Backup and Recovery Tool is available from the Administration application in SAS Environment Manager.  This new SAS Backup Manager interface makes scheduling regular backups and executing ad hoc backups extremely easy to do and greatly simplifies the life of a SAS administrator.  The ease with which new backups can be made should also encourage closer adherence to best practices suggesting backups before and after significant system modifications.

The SAS Backup Manager interface includes a History view of the backups that have been run and displays details of a selected backup. In this case, I selected the oldest backup in my history, which is one I took after my initial 9.4M3 install in July.


So what happens when you want to delete an old backup? It turns out that there really isn’t a safe way to selectively remove an individual backup. Instead, the backup policy configuration lets an administrator specify a retention period for backups; as part of running a new backup, any previous backups older than the retention period are purged from the system.

The default retention period is 30 days which, when coupled with the default scheduled weekly backup, maintains at least four historical backups. This ratio seems fair and reasonable for most sites in terms of disk management and flexibility for recoveries. For my system, I have scheduled daily backups so I reduced the retention period to 15 days which provides me with roughly a rolling two-week window of backups and fits within the disk budget I have for my archived backups. If you change the frequency of the scheduled backups you should also consider modifying the retention period to help balance the space the archived backups will require with the required range of dates for recovery. Scheduling daily backups and leaving the retention period at 30 will provide a recovery window of 30 days but will require space to store 30 sets of backups. Similarly, scheduling daily backups and shrinking the retention period reduces the window of time from which a recovery is possible but also reduces the disk space required to store the archived backups.


What confused me for a bit was that I never saw any change in the status of my old, purged backups in the backup history display.  Turns out that the help documentation for the SAS Backup Manager clearly states that the history view “… includes all backups or recoveries that are recorded in backup history, including backups that have been purged due to the retention policy.”

Aside from examining the Vault location on disk, the best way to determine the current state of a specific backup is to run the sas-display-backup command from the <SASHome>/SASPlatformObjectFramework/9.4/tools/admin directory, passing in the backupid of the entry of interest.


In my case, sas-display-backup reported that the post-install backup I ran back in July was purged once its date exceeded the 15-day retention period I specified.

A few things to keep in mind about purging backups:

  • Configure your backup retention policy carefully, keeping in mind that it will determine how long each backup will be available to use as a recovery point and also the amount of disk space required to store your archived backups.
  • Any, and all, backups older than your backup retention policy value will be purged.  As far as I know, there is no way to tag a backup to prevent it from being purged.
  • The History view does not indicate the current state of a particular backup – only the execution status of the backup operation when it ran for that particular instance.
  • The sas-display-backup command reports the current state of an earlier backup.  If purged is true, it’s gone!
tags: Backup and Recovery Tool, SAS Administrators, SAS Professional Services

Deleting old backups with SAS Backup Manager was published on SAS Users.

February 18, 2015
 

Many larger SAS deployments have multiple instances of similar SAS-related servers. For example, a distributed SAS Enterprise BI environment may have several machines running instances of the object spawner or the OLAP server. Similarly, all of your distributed SAS Visual Analytics deployments have worker nodes that are typically dedicated Linux machines that serve only the needs of SAS Visual Analytics users. As a SAS administrator, it is often useful to understand metrics across a collection of these similar resources to keep tabs on the performance of the system as a whole. Fortunately, SAS Environment Manager provides compatible groups as a way to summarize metrics across a collection of similar resources.

To illustrate the usefulness of this feature, let’s suppose that our organization has a distributed SAS Visual Analytics environment with three worker nodes that host all of our in-memory data. Maybe the CEO relies on this data to run the company, so we want to keep an eye on these machines as a unit (and make sure the CEO stays happy!). We could, and should, of course, monitor each machine individually for thoroughness, but it is also useful to visualize trends across a collection of similar resources to help spot potential problem areas. Additionally, it saves me time if I can check up on resources in groups instead of having to dig into each one individually.

Finding a list of compatible groups. So, let’s start by opening SAS Environment Manager. On the Resources page, we can see that there are several predefined compatible groups in our inventory.


Defining a new group. We then select New Group from the Tools Menu and give our group a name, description, and identify what type of resource we intend to group together. In this case, we are going to group three Linux machines that serve as our SAS Visual Analytics nodes.


Selecting specific platforms. The final step is to select the specific Linux platforms from our available inventory. In this case, I have chosen to group these three machines together because I expect the workload for each machine to be relatively uniform.


Monitoring group performance. That’s it. Now we can select our compatible group named VA Nodes and view metrics and performance over time for these three machines as a unit. Because every member of a compatible group is uniform, the metrics collected across the group can be aggregated for reporting purposes. For example, I can look at the file system read and write operations and the amount of free memory across our three SAS Visual Analytics nodes.


Monitoring individual machines. Examining metrics across the three machines in our group is easy to do as well. Just select one of the metric charts from the Monitoring view, and you can compare the performance of each individual machine over time. In this case, we expect all three machines to display similar performance characteristics, and the chart confirms it.


So there you go. Compatible groups can come in handy to investigate and report on the performance of a set of similar resources.

Happy monitoring.

tags: monitoring jobs and processes, SAS Administrators, SAS Environment Manager, SAS Professional Services, SAS Visual Analytics
December 17, 2014
 

As SAS administrators, I know you are as excited as I am by the ability of SAS Environment Manager to monitor, in detail, the performance of your SAS environments. Now, we have a robust tool to monitor, measure, and report on the performance of the various SAS components. An added bonus: with each maintenance release of SAS 9.4, more features are added to the SAS Environment Manager tool set.

As nice as SAS Environment Manager is, some of you may already be invested in other system monitoring tools. As a result, SAS Professional Services consultants are often asked how to integrate monitoring information from SAS Environment Manager into their existing monitoring systems, especially when it comes to notifying administrators of potential problems. Fortunately, SAS Environment Manager 2.4 now includes an event exporting service that makes it quite simple to integrate with most any third-party monitoring tool.

What is an event?

Before I explain how to export events, it might be good to explain exactly what an event is in this context. SAS Environment Manager monitors metrics, log files, configuration changes, and availability across a wide inventory of resources. Generally speaking, events are the important pieces of information system admins need to know about quickly, as they often communicate warnings, errors, or potential problems that may require attention.

When there is a change in a resource’s state or a change in a resource’s threshold value for one of the monitored items, an event is generated. For example, an event can be emitted if one of the core SAS servers starts, stops or restarts (e.g., an OLAP, Object Spawner or SAS Web App Server).

Additionally, alerts are considered events. So, if the administrator defines an alert to trigger when memory consumption on a machine rises above 95%, the alert that gets generated is considered an event. So the question is, how can we communicate the events originating from SAS Environment Manager to the customer’s existing monitoring tools?

How to communicate an event to a third-party tool

We do this by defining a new platform service to export SAS Environment Manager events to a text file that can be monitored by any third-party monitoring tool. Here are the key steps:

  1. For a selected platform, select New Platform Service from the Tools Menu.
  2. Give the new platform service a name and assign a Service Type of “SAS Event Exporter” as shown below.
  3. Then just provide credentials for one of the users in the Super User role and a location for the file that will contain the generated events.

Events output format

Now, just sit back and let SAS Environment Manager do its thing. Each and every event generated by SAS Environment Manager will now be written to the named file where the third-party monitoring tool can pick it up and incorporate it into its existing notification scheme. The format of an exported event is: dateTtimeOffset | msglevel | source | message1 | <message2>

Here’s a sample of a few events and their format:
EVexport4
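Because the export is just delimited text, a monitoring script can consume it with standard tools. Here is a minimal, hedged sketch: the file path is a placeholder for whatever location you supplied in step 3, and the msglevel values being filtered (WARN and ERROR) are assumptions for illustration.

# Follow the event export file and surface only warning/error events.
# Replace /path/to/exported-events.txt with the location you configured in step 3.
$ tail -F /path/to/exported-events.txt | awk -F'|' '$2 ~ /WARN|ERROR/ { print }'
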
The best part of this feature is that it doesn’t matter which third-party monitoring tool the customer uses. Almost all enterprise-quality monitoring tools can read information from flat text files. All that’s left is to share the event format with your system administrators, and they should be able to collect the events and feed them into their existing systems.

Until next time, happy monitoring!

tags: monitoring jobs and processes, SAS Administrators, SAS Environment Manager, SAS Professional Services
November 12, 2014
 

Did you know that applying a new SAS license file for many SAS solutions is a two-step process?

Because many of SAS’ most popular solutions (including SAS® Visual Analytics, SAS® Enterprise Miner™ and SAS® Customer Intelligence) depend on the middle-tier architecture for primary user access, information about licensed products and expiration dates is maintained in metadata. So to renew licensing properly for another year, SAS administrators need to update the SID file information in metadata in addition to updating the licensing information stored in the CORE catalog.

As a SAS administrator, when you receive a new SID file, you must first apply the new license using either the Renew SAS Software utility on Windows or the sassetup script on UNIX systems. This step updates the CORE catalog with licensed products and expiration dates, and it must be performed on every machine that has a Base SAS installation.

For sites that license SAS solutions, the SAS administrator then needs to run the SAS Deployment Manager and select the “Update SID File in Metadata” option.  This can be done from any machine in the deployment and only needs to be performed once, regardless of the number of machines involved.


You’ll be prompted for metadata connection credentials, the configuration directory, and the fully qualified path to the new SID file.  The SAS Deployment Manager takes care of the rest and the new licensing information will be updated in metadata.  All that’s left to do is to restart all of the SAS processes to pick up the new license.
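
Put together, the two steps on a UNIX deployment look roughly like the sketch below. The paths assume a default SASHome of /usr/local/SASHome, so adjust them for your installation; on Windows, use the Renew SAS Software shortcut and the SAS Deployment Manager instead.

# Step 1: apply the new SID file to Base SAS on EVERY machine with a Base SAS installation.
$ /usr/local/SASHome/SASFoundation/9.4/sassetup          # choose Renew SAS Software and point to the new SID file
# Step 2: update the SID file in metadata ONCE, from any machine in the deployment.
$ /usr/local/SASHome/SASDeploymentManager/9.4/sasdm.sh   # choose "Update SID File in Metadata"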

For more information

Technical Support maintains a list of all SAS solutions that require this two step process in Usage Note 49750.

Similarly, there are detailed instructions for renewing all types of SAS installations available from support.sas.com.  You can search by Base SAS release, product and product release.

tags: deployment, SAS Administrators, SAS Professional Services, software license