May 30, 2010
 
Contributed by Dr. Robert Allison, Research & Development, SAS
Memorial Day is a U.S. federal holiday to commemorate the U.S. men and women who died while in military service.




My SAS graphs are often about numbers and trends — but for a topic like this, I thought it important to put individual names with the numbers. Therefore, this time, I created a graph that enables you to click on the data points and see a list of the names of the U.S. soldiers who died during that month. View the graph. (Note: The graph will open in another window when you click on the link.)

Technical Details


Each dot in the graph has mouse-over text and drilldown capability, which were created using the html= option in the GPLOT procedure. The html= option lets you associate HTML attributes (such as title or alt for the mouse-over text, and href for the drill-downs) with each marker in the plot.
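Here is a minimal sketch of the technique; the dataset, variable names, and link targets below are hypothetical stand-ins, not the ones used for the actual graph:

data monthly;
  length htmlvar $ 200;
  input date :monyy5. deaths;
  * build the mouse-over text and drill-down link for each point;
  htmlvar = 'alt="' || trim(left(put(deaths, comma8.))) || ' deaths" ' ||
            'href="names_' || lowcase(put(date, monyy5.)) || '.htm"';
  format date monyy5.;
datalines;
JAN68 1200
FEB68 1150
MAR68 1300
;
run;

* the html= option takes effect when the graph is written to an HTML destination;
goptions device=png;
ods html body='deaths.html';
proc gplot data=monthly;
  plot deaths*date / html=htmlvar;
run;
quit;
ods html close;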

The links for the Vietnam War deaths display HTML pages that I created for each month.

The links for the Iraq War deaths point to an existing website that lists all the deaths for that month.

I have recently enhanced this graph by converting it from dev=gif to dev=png and running it with a still-under-development version of SAS/GRAPH software. The lines are now much smoother than in a graph created with SAS 9.2. (This line smoothing is called anti-aliasing, and it is a feature under development.)
May 29, 2010
 
%macro get_pdo(data = , score = , y = , wt = NONE, ref_score = , target_odds = , target_pdo = );
**********************************************************************;
* THIS MACRO IS TO CALCULATE OBSERVED ODDS AND PDO FOR ANY SCORECARD *;
* AND COMPUTE ALIGNMENT BETAS TO REACH TARGET ODDS AND PDO           *;
* ------------------------------------------------------------------ *;
* PARAMETERS:                                                        *;
*  DATA       : INPUT DATASET                                        *;
*  SCORE      : SCORECARD VARIABLE                                   *;
*  Y          : RESPONSE VARIABLE IN (0, 1)                          *;
*  WT         : WEIGHT VARIABLE IN POSITIVE INTEGER                  *;
*  REF_SCORE  : REFERENCE SCORE POINT FOR TARGET ODDS AND PDO        *;
*  TARGET_ODDS: TARGET ODDS AT REFERENCE SCORE OF SCORECARD          *;
*  TARGET_PDO : TARGET POINTS TO DOUBLE ODDS OF SCORECARD            *; 
* ------------------------------------------------------------------ *;
* OUTPUTS:                                                           *;
*  REPORT : PDO REPORT WITH THE CALIBRATION FORMULA IN HTML FORMAT   *;
* ------------------------------------------------------------------ *;
* AUTHOR: WENSLIU@PAYPAL.COM                                         *;
**********************************************************************;

options nonumber nodate orientation = landscape nocenter;  

*** CHECK IF THERE IS WEIGHT VARIABLE ***;
%if %upcase(&wt) = NONE %then %do;
  data _tmp1 (keep = &y &score _wt);
    set &data;
    where &y in (1, 0)  and
          &score ~= .;
    _wt = 1;
    &score = round(&score., 1);
  run;
%end;
%else %do;
  data _tmp1 (keep = &y &score _wt);
    set &data;
    where &y in (1, 0)        and
          &score ~= .         and
          round(&wt., 1) > 0;
    _wt = round(&wt., 1);
    &score = round(&score., 1);
  run;
%end;

proc logistic data = _tmp1 desc outest = _est1 noprint;
  model &y = &score;
  freq _wt;
run;

proc sql noprint;
  select round(min(&score), 0.01) into :min_score from _tmp1;

  select round(max(&score), 0.01) into :max_score from _tmp1;
quit;

data _est2;
  set _est1 (keep = intercept &score rename = (&score = slope));
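
  * THE ALIGNMENT FOLLOWS THE STANDARD SCALING: SCORE = OFFSET + FACTOR * LOG(ODDS),      ;
  * WHERE FACTOR = TARGET_PDO / LOG(2) AND OFFSET = REF_SCORE - FACTOR * LOG(TARGET_ODDS),;
  * REWRITTEN BELOW IN TERMS OF THE FITTED LOGISTIC INTERCEPT AND SLOPE                   ;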

  adjust_beta0 = &ref_score - (&target_pdo * log(&target_odds) / log(2)) - intercept * &target_pdo / log(2);
  adjust_beta1 = -1 * (&target_pdo * slope / log(2));

  do i = -5 to 5;
    old_pdo = round(-log(2) / slope, 0.01);
    old_ref = &ref_score + (i) * old_pdo;
    old_odd = exp(-(slope * old_ref + intercept));
    if old_ref >= &min_score and old_ref <= &max_score then output;
  end;
run;

data _tmp2;
  set _tmp1;
  
  if _n_ = 1 then do;
    set _est2(obs = 1);
  end;

  adjusted = adjust_beta0 + adjust_beta1 * &score;
run;

proc logistic data = _tmp2 desc noprint outest = _est3;
  model &y = adjusted;
  freq _wt;
run;

data _est4;
  set _est3 (keep = intercept adjusted rename = (adjusted = slope));

  adjust_beta0 = &ref_score - (&target_pdo * log(&target_odds) / log(2)) - intercept * &target_pdo / log(2);
  adjust_beta1 = -1 * (&target_pdo * slope / log(2));

  do i = -5 to 5;
    new_pdo = round(-log(2) / slope, 0.01);
    new_ref = &ref_score + (i) * new_pdo;
    new_odd = exp(-(slope * new_ref + intercept));
    if new_ref >= &min_score and new_ref <= &max_score then output;
  end;
run;
 
proc sql noprint;
create table
  _final as
select
  &target_pdo            as target_pdo,
  &target_odds           as target_odds,
  a.old_pdo              as pdo1,
  a.old_ref              as ref1,
  a.old_odd              as odd1,
  log(a.old_odd)         as ln_odd1,
  a.adjust_beta0         as adjust_beta0,
  a.adjust_beta1         as adjust_beta1,
  b.new_pdo              as pdo2,
  b.new_ref              as ref2,
  b.new_odd              as odd2,
  log(b.new_odd)         as ln_odd2
from
  _est2 as a inner join _est4 as b
on
  a.i = b.i;

select round(pdo1, 1) into :pdo1 from _final;

select put(max(pdo1 / pdo2 - 1, 0), percent10.2) into :compare from _final;

select case when pdo1 > pdo2 then 1 else 0 end into :flag from _final;

select put(adjust_beta0, 12.8) into :beta0 from _final;

select put(adjust_beta1, 12.8) into :beta1 from _final;
quit;

%put &compare;
ods html file = "%upcase(%trim(&score))_PDO_SUMMARY.html" style = sasweb;
title;
proc report data  = _final box spacing = 1 split = "/" 
  style(header) = [font_face = "courier new"] style(column) = [font_face = "courier new"]
  style(lines) = [font_face = "courier new" font_size = 2] style(report) = [font_face = "courier new"];

  column("/SUMMARY OF POINTS TO DOUBLE ODDS FOR %upcase(&score) WEIGHTED BY %upcase(&wt) IN DATA %upcase(&data)
          /( TARGET PDO = &target_pdo, TARGET ODDS = &target_odds AT REFERENCE SCORE &ref_score ) / "
         pdo1 ref1 odd1 ln_odd1 pdo2 ref2 odd2 ln_odd2);

  define pdo1    / "OBSERVED/SCORE PDO"   width = 10 format = 4.   center;
  define ref1    / "OBSERVED/REF. SCORE"  width = 15 format = 5.   center order order = data;
  define odd1    / "OBSERVED/ODDS"        width = 15 format = 14.4 center;
  define ln_odd1 / "OBSERVED/LOG ODDS"    width = 15 format = 8.2  center;
  define pdo2    / "ADJUSTED/SCORE PDO"   width = 10 format = 4.   center;
  define ref2    / "ADJUSTED/REF. SCORE"  width = 15 format = 5.   center;
  define odd2    / "ADJUSTED/ODDS"        width = 15 format = 14.4 center;  
  define ln_odd2 / "ADJUSTED/LOG ODDS"    width = 15 format = 8.2  center;

  compute after;
  %if &flag = 1 %then %do;
    line @15 "THE SCORE ODDS IS DETERIORATED BY %trim(&compare).";
    line @15 "CALIBRATION FORMULA: ADJUSTED SCORE = %trim(&beta0) + %trim(&beta1) * %trim(%upcase(&score)).";
  %end;
  %else %do;
    line @25 "THERE IS NO DETERIORATION IN THE SCORE ODDS.";
  %end;
  endcomp;
run;
ods html close;

*************************************************;
*              END OF THE MACRO                 *;
*************************************************; 
%mend get_pdo;
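
A hypothetical invocation, assuming a dataset SCORED with a score variable CREDIT_SCORE and a 0/1 response BAD, targeting odds of 50:1 at a reference score of 600 with 20 points to double the odds:

%get_pdo(data = scored, score = credit_score, y = bad, wt = NONE,
         ref_score = 600, target_odds = 50, target_pdo = 20);

The macro then writes the PDO report to CREDIT_SCORE_PDO_SUMMARY.html in the current directory.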
May 29, 2010
 
%macro separate(data = , score = , y = , wt = NONE);
***************************************************************;
* THIS MACRO IS TO EVALUATE THE PREDICTABILITY OF A SCORECARD *; 
* ----------------------------------------------------------- *;
* PARAMETERS:                                                 *;
*  DATA  : INPUT DATASET                                      *;
*  SCORE : SCORECARD VARIABLE                                 *;
*  Y     : RESPONSE VARIABLE IN (0, 1)                        *;
*  WT    : WEIGHT VARIABLE IN POSITIVE INTEGER                *;
* ----------------------------------------------------------- *;
* OUTPUTS:                                                    *;
*  &AUC        : MACRO VARIABLE FOR AUC STATISTICS            *;
*  &GINI       : MACRO VARIABLE FOR GINI COEFFICIENT          *;
*  &KS         : MACRO VARIABLE FOR KS STATISTICS             *;
*  &KS_SCORE   : MACRO VARIABLE FOR SCORE VALUE MAXIMIZING KS *; 
*  &DIVERGENCE : MACRO VARIABLE FOR DIVERGENCE                *; 
*  REPORT      : GOOD-BAD SEPARATION REPORT IN HTML FORMAT    *;
* ----------------------------------------------------------- *;
* AUTHOR: WENSLIU@PAYPAL.COM                                  *;
***************************************************************;

options nonumber nodate orientation = landscape linesize = 160 nocenter;  

*** DEFAULT GROUP NUMBER FOR REPORT ***;
%let grp = 10;

*** CHECK IF THERE IS WEIGHT VARIABLE ***;
%if %upcase(&wt) = NONE %then %do;
  data _tmp1 (keep = &y &score _wt);
    set &data;
    where &y in (1, 0)  and
          &score ~= .;
    _wt = 1;
    &score = round(&score., 1);
  run;
%end;
%else %do;
  data _tmp1 (keep = &y &score _wt);
    set &data;
    where &y in (1, 0)        and
          &score ~= .         and
          round(&wt., 1) > 0;
    _wt = round(&wt., 1);
    &score = round(&score., 1);
  run;
%end;

filename lst_out temp;

proc printto new print = lst_out;
run;

*** CONDUCT NON-PARAMETRIC TESTS ***; 
ods output wilcoxonscores = _wx;
ods output kolsmir2stats = _ks;
proc npar1way wilcoxon edf data = _tmp1;
  class &y;
  var &score;
  freq _wt;
run;

proc printto;
run;

proc sort data = _wx;
  by class;
run;

*** CALCULATE ROC AND GINI ***;
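* AUC IS DERIVED FROM THE WILCOXON RANK SUMS CAPTURED ABOVE:                ;
* AUC = (W - E[W]) / (N0 * N1) + 0.5, WHERE W AND E[W] ARE THE OBSERVED AND ;
* EXPECTED RANK SUMS OF THE FIRST CLASS AND N0, N1 ARE THE TWO CLASS SIZES. ;
* GINI = 2 * (AUC - 0.5)                                                    ;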
data _null_;
  set _wx end = eof;
  by class;

  array a{2, 3} _temporary_;
  if _n_ = 1 then do;
    a[1, 1] = n;
    a[1, 2] = sumofscores;
    a[1, 3] = expectedsum;
  end;
  else do;
    a[2, 1] = n;
  end;
  if eof then do;
    auc  = (a[1, 2] - a[1, 3]) / (a[1, 1] * a[2, 1])  + 0.5;
    gini = 2 * (auc - 0.5);  
    call execute('%global auc; %let auc = '||put(auc, 10.8)||';');
    call execute('%global gini; %let gini = '||put(gini, 10.8)||';');
  end;
run;

*** CALCULATE KS ***;
data _null_;
  set _ks;

  if _n_ = 1 then do;
    ks = nvalue2 * 100;
    call execute('%global ks; %let ks = '||put(ks, 10.8)||';');
  end;
run;

*** CAPTURE SCORE POINT FOR MAX KS ***;
data _null_;
  infile lst_out;
  input @" at Maximum = " ks_score;
  output;
  call execute('%global ks_score; %let ks_score = '||put(ks_score, 10.0)||';');
  stop;
run;

proc summary data = _tmp1 nway;
  class &y;
  freq _wt;
  output out = _data_ (drop = _type_ _freq_)
  mean(&score.) = mean var(&score.) = variance;
run;

*** CALCULATE DIVERGENCE ***;
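* DIVERGENCE = (MEAN1 - MEAN0) ** 2 / ((VAR1 + VAR0) / 2), THE SQUARED   ;
* DIFFERENCE OF THE TWO CLASS MEANS OVER THE AVERAGE OF THEIR VARIANCES  ;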
data _null_;
  set _last_ end = eof;
  array a{2, 2} _temporary_;
  if _n_ = 1 then do;
    a[1, 1] = mean;
    a[1, 2] = variance;
  end;
  else do;
    a[2, 1] = mean;
    a[2, 2] = variance;
  end;
  if eof then do;
    divergence = (a[1, 1] - a[2, 1]) ** 2 / ((a[1, 2] + a[2, 2]) / 2);
    call execute('%global divergence; %let divergence = '||put(divergence, 10.8)||';');
  end;
run;

*** CAPTURE THE DIRECTION OF SCORE ***;
ods listing close;
ods output spearmancorr = _cor;
proc corr data = _tmp1 spearman;
  var &y;
  with &score;
run;
ods listing;
 
data _null_;
  set _cor;
  if &y >= 0 then do;
    call symput('desc', 'descending');
  end;
  else do;
    call symput('desc', ' ');
  end;
run;
%put &desc;

*** DEFINE DECILES WITH WEIGHT VARIABLE ***;
proc univariate data = _tmp1 noprint;
  freq _wt;
  var &score;
  output out = _tmp2 pctlpre = decile_ pctlpts = 0 to 100 by %sysevalf(100 / &grp.);
run;

data _tmp3 (keep = rank _wt &y &score _dwt);
  set _tmp1;

  if _n_ = 1 then do;
    set _tmp2;
  end;

  array d[*] decile_:;
  rank = 999;
  do i = 2 to dim(d);
    if &score <= d[i] and rank = 999 then do;
      rank = i - 1;
    end;
  end;  
  
  _dwt = 1 / _wt;
run;

proc summary data = _last_ nway;
  class rank;
  freq _wt;
  output out = _data_ (drop = _type_ rename = (_freq_ = freq))
  min(&score) = min_score max(&score) = max_score
  sum(&y) = bads sum(_dwt) = raw;
run;

proc sql noprint;
  select sum(bads) into :bcnt from _last_;
  select sum(freq) - sum(bads) into :gcnt from _last_;
quit;

proc sort data = _last_ (drop = rank);
  by &desc min_score;
run;

data _data_;
  set _last_;
  by &desc min_score;

  i + 1;
  percent = i * %sysevalf(100 / &grp) / 100;
  good  = freq - bads;
  odds  = good / bads;

  hit_rate = bads / freq;
  retain cum_bads cum_freq;
  cum_bads + bads;
  cum_freq + freq;
  cum_hit_rate = cum_bads / cum_freq;

  cat_rate = bads / &bcnt;
  retain cum_cat_rate;
  cum_cat_rate + cat_rate;
run;

*** GENERATE GAIN CHART ***;
ods html file = "%upcase(%trim(&score))_SEPARATION_REPORT.html" style = sasweb;
title;
proc report data = _last_ box spacing = 1 split = "/"
  style(header) = [font_face = "courier new"] style(column) = [font_face = "courier new"]
  style(lines) = [font_face = "courier new"] style(report) = [font_face = "courier new"]
  style(summary) = [font_face = "courier new" font_style = roman];

  column("/GOOD BAD SEPARATION REPORT FOR %upcase(%trim(&score)) WEIGHTED BY %upcase(&wt) IN DATA %upcase(%trim(&data))
          / MAXIMUM KS = %trim(&ks) AT SCORE POINT %trim(&ks_score) /  
          / ( AUC STATISTICS = %trim(&auc), GINI COEFFICIENT = %trim(&gini), DIVERGENCE = %trim(&divergence) )/ /"
         percent min_score max_score raw good bads freq odds hit_rate cum_hit_rate cat_rate cum_cat_rate);

  define percent      / " % " center            width = 6  format = percent6.0 order order = data;
  define min_score    / "MIN/SCORE"             width = 6  format = 5.         analysis min center;
  define max_score    / "MAX/SCORE"             width = 6  format = 5.         analysis max center;
  define raw          / "RAW/UNITS"             width = 10 format = comma9.    analysis sum center;
  define good         / "GOOD/#"                width = 15 format = comma14.   analysis sum;
  define bads         / "BAD/#"                 width = 15 format = comma14.   analysis sum;
  define freq         / "TOTAL/#"               width = 15 format = comma14.   analysis sum;
  define odds         / "ODDS"                  width = 10 format = 9.         order;
  define hit_rate     / "HIT/RATE"              width = 10 format = percent9.2 order center;
  define cum_hit_rate / "CUMULATIVE/HIT RATE"   width = 10 format = percent9.2 order;
  define cat_rate     / "CATCH/RATE"            width = 10 format = percent9.2 order center;
  define cum_cat_rate / "CUMULATIVE/CATCH RATE" width = 10 format = percent9.2 order;

  rbreak after / summarize style = [font_weight = bold];
run;
ods html close;

*************************************************;
*              END OF THE MACRO                 *;
*************************************************; 
%mend separate;
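
A hypothetical invocation, using the same illustrative names as in the PDO macro above:

%separate(data = scored, score = credit_score, y = bad, wt = NONE);

After the call, the global macro variables &AUC, &GINI, &KS, &KS_SCORE, and &DIVERGENCE hold the separation statistics, and the gain chart is written to CREDIT_SCORE_SEPARATION_REPORT.html in the current directory.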
May 28, 2010
 
When you travel as much as a typical SAS instructor, you get to be an expert on hotels, restaurants and entertainment. Anyone can figure out where to eat or what to do in downtown Chicago or in Manhattan, but it takes a dedicated traveler to show you where the action is in Bedminster, New Jersey. (Note: there is none).

For those of you attending a SAS course, there is a web page available for each of our training locations in the United States and Canada with information on how to get to the training center, where to stay, and for some locations, entertainment and dining options. (For locations outside the United States and Canada, see your local SAS web site).

Our most popular training location is at our World Headquarters in Cary, North Carolina. If you're scheduled to take a course in Cary, you may have a number of questions, including:

  • Where is Cary?
  • What language is spoken in Cary?
  • Where should I stay?
  • Where should I eat?
  • What should I do after class?
  • How much should I tip the instructor?
  • Do SAS employees really get free M&Ms?

When I'm visiting a training location, I certainly appreciate the information that's available on the Training Center web site, but sometimes I need a little more information, particularly about where to eat and what to do. For those of you coming to Cary for a class, I'm going to devote the next few postings to answering the questions in the list above. So, let's get started:

Continue reading "What Time Do They Roll Up the Sidewalks in Cary?"
May 28, 2010
 
More and more, executives have questions about social media, and recently they have turned to me for answers.

It’s clear they’re moving along the continuum of interest from ‘I’ll let this fad pass,’ to ‘wow, I best get on board this train before I get left behind.’

Because I grapple with the same issues, people feel comfortable approaching me with what could be seen as “basic” questions. Typically, the conversation starts with a light touch on the shoulder, and a motion to talk in private. These chats have recurring themes:

- I am not a social media native
- I have little idle time
- I need to accelerate the learning curve and get value

Just last week, I met with an executive who really wanted to learn more about social. We sat down to lunch and I immediately launched my browser to jump into a demonstration of the various social networks and tools.

She reached over and tapped my arm to get my attention. She asked a question that reminded me once again why she has been so successful in her career: "Deb, before we get started, give me some context. Why is Social Media so important? Why should I spend time learning and using it?"

My answer? There are 3 reasons:

  1. Nearly all companies now recruit via Social Media
  2. The demographics of the US population will change drastically in the next 5-10 years as the Millennials join the workforce. Their methods of communication, their trust and openness, and their use of Social Media will overwhelm us. It will happen fast. Boomers were a huge influence due to their overwhelming numbers. This "Echo Boom" brings with it not only numbers (60 million, just shy of the 78 million Boomers) but a vastly different view of communication.
  3. Increasingly, customers make buying decisions based on digital content. It’s imperative that we make our content and service available to them on their terms.

After setting the stage with this context, I walk through these 4 steps, which, by the way, could be used to pique the interest of anyone in your organization:

  1. Google your name. What appears? Is it what you had hoped for? This is your brand!
  2. Create value by networking: LinkedIn profile and groups. Two really great resources are an article from Fortune and a blog post by Jason Alba that describes how Chris Brogan uses LinkedIn
  3. Determine if Twitter is a good fit. I show people how I use TweetDeck as a content aggregator and search tool. Who I follow is becoming less important than the searches I monitor.
  4. Teach people to listen before contributing to the discussion.

For more tips and tricks, I recommend Chris Brogan's newest book, Social Media 101, and a great new video titled ‘Social Media Revolution 2’ that’s filled with interesting, and sometimes shocking, statistics concerning the social environment.

Don't get left behind!
May 26, 2010
 
I love a good collaborative project.

Recently, I had the good fortune to work with a global team of Business Intelligence experts to create a new SAS certification exam for “BI Content Development”. Our team spanned 4 continents, worked in 6 time zones, and represented a cross section of job functions including Technical Education, Technical Support, Consulting, and Professional Services.

We shared ideas about the key job skills for the BI Content Developer job role. “How much OLAP skill does the content developer need to have?” and “Does a developer need to know how to build information maps or just how to use them?” were common discussions. Over the course of a few weeks, the skills that best defined a certified BI Content Developer were compiled, and the final result is shown on the exam content webpage. The recommended training for this exam is the SAS Business Intelligence Fast Track course.

With the exam content defined, we held a workshop in Cary, North Carolina to create the exam questions. We divvied up the objectives and wrote the questions. Then the real collaborative fun began when we entered the review! For each question, we debated its merit, revised its language, and ensured its technical accuracy and its relevance to the job. It was an intense review; we used every minute of every day available to us, taking very short breaks and working long hours. Exhausting, but very rewarding.

In the end, we created an exam ready for the beta process. It was a great experience, seeing people from a broad background of cultural and technical experiences come together to complete a project. Are you interested in taking the beta exam? As of this morning, there are still a few free registrations available for non-SAS employees. More information is on the exam registration page.
May 25, 2010
 
Currently I'm ...

Reading
Recently finished Analytics at Work: Smarter Decisions, Better Results by Davenport, Harris, and Morison (2010, Harvard Business Press). I've assigned it as companion reading for the students who take the Advanced Business Analytics course I'm writing right now. It is an excellent reference for anyone who wants a roadmap for how to help their organization embrace analytics. The book stops short of explaining statistical analyses, leaving those topics to other writers, but it addresses what most good data mining books lack: the characteristics of analytical leadership, pitfalls in implementing analytics in a business, and ways to get management buy-in for analytics.

Writing
Advanced Business Analytics 1. This course is a collaborative effort by statisticians in the Education division at SAS and professors at some of the top business schools in the US. The course will be given, free of charge, to professors at graduate programs in universities around the world, in an effort to sharpen the analytical teeth of tomorrow's business leaders. The course is largely designed for MBA students, although we are hearing buzz from statistics, economics, marketing, and management information systems departments as well. We are preparing a kick-off training for about 35 professors this summer, and will likely hold additional training events in the future, should demand warrant. The course is essentially an introduction to data mining with a heavy emphasis on solving business problems. Students coming out of the course should be prepared to manage analytical staff, to make smart decisions when working with statistical analysts, and to move their businesses toward a more analytical model of decision making.

Listening to
I have been on a really weird Gilbert and Sullivan kick lately. Why? I used to listen to all of their operettas as a kid, and now I find myself trying to sing something from The Yeomen of the Guard or The Pirates of Penzance to my kids, and I have to download the song to get the lyrics right. Well, at least it has cleared the ABBA backlog I had in my brain the last 3 months. (Thank you very much, Mamma Mia DVD, for making my 3-year-old a diva.)

Hearing
I'm hearing a lot more about structural equation modeling lately. It is no coincidence that JMP and SAS/STAT development teams are producing a point-and-click interface to perform SEM within JMP, which calls the newly-improved PROC CALIS. It's a really slick interface, if you get a chance to preview it at a conference. The sudden resurgence of interest in SEM is encouraging, because SEM is one of the more interesting modeling techniques around. I'm hearing a lot about SEM in pharmaceuticals, manufacturing, and insurance. If you have an example where your organization is using SEM, shoot me a comment or an email to share how you're using it. I'd love to hear from you!

Considering
John Naisbitt's High Tech High Touch. Chatting with a colleague, Tom Henry, in the breakroom this morning got us thinking about how important it is to have personal contact with customers. Not everyone wants or needs to talk to a sales rep (I am generally in the category of "Don't call me, I'll call you" when it comes to sales reps). With the great advances in data integration, automated analytics, scoring, and marketing automation, there are fewer reasons than ever to have to talk to a real person when purchasing a product or service. But then there are situations that scream for an exception. A customer calls software sales, wanting a training quote for an Enterprise Guide course, covering programming to address stacking and nesting, preparing data for perceptual mapping. No automated system will place this person into the right course. Tom recognizes that this is the time to pick up the phone and talk -- what do you need to do? What are you doing now? What do you need to do differently? At the end of the day, high tech is able to clear the space for Tom to have that critical High Touch with someone who really needed it. And he won't waste time calling someone who would rather just "buy online."

Do you have technology working for you to make it easier to identify those critical touch situations?
May 25, 2010
 
If you are a SAS/GRAPH user, you may have heard about the Statistical Graphics ("SG") procedures that are new in SAS 9.2. These procedures are designed to make it easier to produce common statistical charts and plots.

You may have assumed that the procedures are only for folks who can explain the difference between a fitted LOESS curve and a penalized B-spline curve (which I can't). In fact, several of the new procedures (SGPLOT, SGPANEL, and SGSCATTER) can be used to produce non-statistical graphs, such as bar charts and line plots. In some cases, they can be used to produce non-statistical charts and plots that can't be created with PROC GCHART or PROC GPLOT. For example, the following program produces a bar chart with transparent, overlaid bars:

title "Sales and Expenses";
proc sgplot data=budget;
  vbar year / response=expenses nostatlabel transparency=.5;
  vbar year / response=sales barwidth=.6 nostatlabel transparency=.5;
run;
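
(The budget dataset isn't shown in the post; a minimal, made-up example of input data that would work with the code above:)

data budget;
  input year expenses sales;
datalines;
2007 180 250
2008 210 290
2009 195 260
;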



Continue reading "SAS/GRAPH "SG" Procedures--Not Just for Statisticians!"
May 25, 2010
 
Contributed by Maura Stokes, Director, Research & Development, SAS
In order to get new analytical functionality to our customers faster than ever, SAS has released new versions of SAS/STAT®, SAS/ETS®, and SAS/OR® software. These products are coupled with the third maintenance release of SAS 9.2 and are available now. If you install the maintenance, the 9.2 versions of these products will be updated automatically.

SAS/STAT 9.22 includes a new procedure for survival analysis of sample survey data and many new features for postfitting analysis for procedures that fit linear models. SAS/ETS 9.22 provides new software for modeling the distribution of the severity of events, and SAS/OR 9.22 includes support for solving large-scale, nonlinear optimization problems as well as a production version of SAS Simulation Studio. These products contain many other enhancements as well.

The documentation for these products is also available in the Documentation area of support.sas.com. To locate it, select Documentation from the KNOWLEDGE BASE navigation. You will notice the new entry for SAS 9.22 Analytical Products (shown in the image to the right). Select SAS 9.22 Analytical Products to access a list of documentation for these updated products.
May 25, 2010
 
A few weeks ago I posted 'Chris Brogan Talks About Making Social Media Happen in Your Company', the fourth (of six) Chris Brogan videos taped when Chris was at our headquarters in Cary, NC.

In this video, the fifth in this series, SAS' Deb Orton interviews Chris Brogan of New Marketing Labs. Chris, channeling his ‘stunt double’ Katie Paine, discusses how to measure social media effectiveness and how to set objectives for using social media in marketing.

Enjoy the video. And stay tuned. We'll post the final segment from this interview series soon, and we'll continue this series with new interviews from SAS practitioners in the coming months.