Editor's note: The following post is from Shara Evans, CEO of Market Clarity Pty Ltd. Shara is a featured speaker at SAS Global Forum 2017, a globally acknowledged Keynote Speaker, and widely regarded as one of the world’s Top Female Futurists.
Learn more about Shara.
In the movie Minority Report, lead character John Anderton, played by Tom Cruise, has an eye transplant in order to avoid being recognized by ubiquitous iris-scanning identification systems.
Such surgical procedures still face some fairly significant challenges, in particular connecting the optic nerve of the transplanted eye to that of the recipient. However, the concept of pervasive individual identification systems is now very close to reality, and even where the surgical evasion is possible, it’s a seriously drastic option!
We’re talking face recognition here.
Many facial recognition systems are built on the concept of “cooperative systems,” where you look directly at the camera from a pre-determined distance, in good lighting, and your photo is compared against a verified image stored in a database. This type of system is used extensively for border control and physical security systems.
Where it gets really interesting is with “non-cooperative systems,” which aim to recognize faces in a crowd: in non-optimal lighting situations and from a variety of angles. These systems aim to recognize people who could be wearing spectacles, scarves or hats, and who might be on the move. An Australian company, Imagus Technology, has designed a system that is capable of doing just that — recognizing faces in a crowd.
To do this, the facial recognition system compiles a statistical model of a face by looking at low-frequency textures such as bone structure. While some systems use very high-frequency features such as moles on the skin, eyelashes, wrinkles, or crow’s feet at the edges of the eyes, this requires a very high-quality image. With people walking past, by contrast, there’s motion blur, non-optimal camera angles and so on, so in this case using low-frequency information yields very good matches.
Biometrics are also gaining rapid acceptance for both convenience and fraud prevention in payment systems. The two most popular biometric markers are fingerprints and facial recognition, and they are generally deployed as part of a two-factor authentication system. For example, MasterCard’s “Selfie Pay” app was launched in Europe in late 2016 and is now being rolled out to other global locations. This application was designed to speed up and secure online purchases.
Facial recognition is particularly interesting, because while not every mobile phone in the world will be equipped with a fingerprint reader, virtually every device has a camera on it. We’re all suffering from password overload, and biometrics - if properly secured, and rolled out as part of a multi-factor authentication process - can provide a solution to coming up with, and remembering, complex passwords for the many apps and websites that we frequent.
It’s not just about recognizing individuals
Facial recognition systems are also being used for marketing and demographics. In a store, for example, you might want to count the number of people looking at your billboard or your display. You'd like to see a breakdown of how many males and females there are, age demographics, time spent in front of the ad, and other relevant parameters.
Can you imagine a digital advertising sign equipped with facial recognition? In Australia, Digital Out-of-Home (DOOH) devices are already being used to choose the right time to display a client’s advertising. To minimize wastage in ad spend, ads are displayed only to a relevant audience demographic; for instance, playing an ad for a family pie only when it sees a mum approaching.
What if you could go beyond recognizing demographics to analyzing people’s emotions? Advances in artificial intelligence are turning this science fiction concept into reality. Robots such as “Pepper” are equipped with specialized emotion recognition software that allows them to adapt to human emotions. Again, in an advertising context, this could prove to be marketing gold.
Of course, new technology is always a double-edged sword, and biometrics and advanced emotion detection certainly fall into this category.
For example, customers typically register for a biometric payment system in order to realize a benefit such as faster or more secure e-commerce checkouts or being fast-tracked through security checks at airports. However, the enterprise collecting and using this data must in turn satisfy the customer that their biometric reference data will be kept and managed securely, and used only for the stated purpose.
The advent of advanced facial recognition technologies provides new mechanisms for retailers and enterprises to identify customers, for example from CCTV cameras as they enter shops or as they view public advertising displays. It is when these activities are performed without the individual’s knowledge or consent that concerns arise.
Perhaps most worrisome is that emotion recognition technology would be impossible to control. For example, anyone would be able to take footage of world leaders fronting the press in apparent agreement after the outcome of major negotiations and perhaps reveal their real emotions!
From a truth perspective, maybe this would be a good thing.
But imagine that you’re involved in intense business negotiations. In the not-too-distant future, advanced augmented reality glasses or contacts could be used to record and analyze the emotions of everyone in the room in real time. Or maybe you’re having a heart-to-heart talk with a family member or friend. Is there such a thing as too much information?
Most of the technology for widespread exploitation of face recognition is already in place: pervasive security cameras connected over broadband networks to vast resources of cloud computing power. The only piece missing is the software. Once that becomes reliable and readily available, hiding in plain sight will no longer be an option.
Find out more at SAS Global Forum
This is a preview of some of the concepts that Shara will explore in her speech on “Emerging Technologies: New Data Sets to Interpret and Monetize” at SAS Global Forum:
- Emerging technologies such as advanced wearables, augmented and virtual reality, and biometrics — all of which will generate massive amounts of data.
- Smart Cities — Bringing infrastructure to life with sensors, IoT connections and robots
- Self Driving Cars + Cars of the Future — Exploring the latest in automotive technologies, robot vision, vehicle sensors, V2V comms + more
- The Drone Revolution — looking at both the incredible benefits and challenges we face as drones take to the skies with high definition cameras and sensors.
- The Next Wave of Big Data — How AI will transform information silos, perform advanced voice recognition, facial recognition and emotion detection
- A Look Into the Future — How the convergence of biotech, ICT, nanotechnologies and augmentation of our bodies may change what it means to be human.
Join Shara for a ride into the future where humans are increasingly integrated with the ‘net!
About Shara Evans
Technology Futurist Shara Evans is a globally acknowledged Keynote Speaker and widely regarded as one of the world’s Top Female Futurists. Highly sought after and in demand by conference producers and media, Shara provides the latest insights and thought-provoking ideas on a broad spectrum of issues. Shara can be reached via her website: www.sharaevans.com
(Note: My new website will be launching in a few weeks. In the meantime, the URL automatically redirects to my company website – www.marketclarity.com.au )
After the recent presidential election, I was updating my graphs of the voter registration data and noticed that the number of registered voters decreased after the election. At first I thought that was odd, but then I realized that maybe inactive voters were being purged. I wanted to find out […]
The post Purging inactive voter registrations in North Carolina appeared first on SAS Learning Post.
The term compliance is most often associated with control. It evokes visions of restrictions, regulations and security protecting something which is to remain private. The term open is most often associated with access, and it evokes visions of an absence of restrictions, regulations and security – making something available which is […]
A common question on SAS discussion forums is how to repeat an analysis multiple times. Most programmers know that the most efficient way to analyze one model across many subsets of the data (perhaps each country or each state) is to sort the data and use a BY statement to repeat the analysis for each unique value of one or more categorical variables. But did you know that a BY-group analysis can sometimes be used to replace macro loops? This article shows how you can efficiently run hundreds or thousands of different regression models by restructuring the data.
One model: Many samples
As I've written before, BY-group analysis is also an efficient way to analyze simulated samples or bootstrap samples. I like to tell people that you can choose "the slow way or the BY way" to analyze many samples.
In that phrase, "the slow way" refers to the act of writing a macro loop that calls a SAS procedure to analyze one sample. The statistics for all the samples are later aggregated, often by using PROC APPEND. As I (and others) have written, macro loops that call a procedure hundreds or thousands of times are relatively slow.
As a general rule, if you find yourself programming a macro loop that calls the same procedure many times, you should ask yourself whether the program can be restructured to take advantage of BY-group processing.
Stuck in a macro loop? BY-group processing can be more efficient. #SASTip
Many models: One sample
There is another application of BY-group processing, which can be incredibly useful when it is applicable. Suppose that you have wide data with many variables: Y, X1, X2, ..., X1000. Suppose further that you want to compute the 1000 single-variable regression models of the form Y=Xi, where i = 1 to 1000.
One way to run 1000 regressions would be to write a macro that contains a %DO loop that calls PROC REG 1000 times. The basic form of the macro would look like this:
%macro RunReg(DSName, NumVars);
...
%do i = 1 %to &NumVars;                        /* repeat for each x&i */
   proc reg data=&DSName noprint
            outest=PE(rename=(x&i=Value));     /* save parameter estimates */
      model Y = x&i;                           /* model Y = x_i */
   quit;
   /* ...then accumulate statistics... */
%end;
%mend;
The OUTEST= option saves the parameter estimates in a data set. You can aggregate the statistics by using PROC APPEND or the DATA step.
If you use a macro loop to do this computation, it will take a long time for all the reasons stated in the article "The slow way or the BY way." Fortunately, there is a more efficient alternative.
The BY way for many models
An alternative way to analyze those 1000 regression models is to transpose the data to long form and use a BY-group analysis. Whereas the macro loop might take a few minutes to run, the BY-group method might complete in less than a second. You can download a test program and compare the time required for each method by using the link at the end of this article.
To run a BY-group analysis:
- Transpose the data from wide to long form. As part of this process, you need to create a variable (the BY-group variable) that will be unique for each model.
- Sort the data by the BY-group variable.
- Run the SAS procedure, which uses the BY statement to specify each model.
1. Transpose the data
In the following code, the explanatory variables are read into an array X. The name of each variable is stored by using the VNAME function, which returns the name of the variable that is in the ith element of the array X. If the original data had N observations and p explanatory variables, the LONG data set contains Np observations.
/* 1. transpose from wide (Y, X1, ..., X100) to long (varNum VarName Y Value) */
data Long;
set Wide;                        /* <== specify data set name HERE */
array x [*] x1-x&nCont;          /* <== specify explanatory variables HERE */
do varNum = 1 to dim(x);
   VarName = vname(x[varNum]);   /* variable name in char var */
   Value = x[varNum];            /* value for each variable for each obs */
   output;
end;
drop x:;
run;
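For readers who don't use SAS, the same wide-to-long reshape can be sketched in plain Python. This is an illustrative analogue only (the data and field names are invented): each observation contributes one long row per explanatory variable, and VarName plays the role of the BY-group key.

```python
# Hypothetical wide records: one dict per observation, with y plus x1..xp.
wide = [
    {"y": 1.0, "x1": 0.0, "x2": 2.0},
    {"y": 3.0, "x1": 1.0, "x2": 1.0},
    {"y": 5.0, "x1": 2.0, "x2": 0.0},
]

# Long form: one row per (observation, explanatory variable) pair.
long_rows = [
    {"VarName": name, "Value": value, "y": rec["y"]}
    for rec in wide
    for name, value in rec.items()
    if name != "y"
]

# Sorting by VarName corresponds to the PROC SORT step that follows.
long_rows.sort(key=lambda r: r["VarName"])
```

With N = 3 observations and p = 2 explanatory variables, the long layout has Np = 6 rows, matching the note above.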
2. Sort the data
In order to perform a BY-group analysis in SAS, sort the data by the BY-group variable. You can use the VARNUM variable if you want to preserve the order of the variables in the wide data. Or you can sort by the name of the variable, as done in the following call to PROC SORT:
/* 2. Sort by BY-group variable */
proc sort data=Long;
by VarName;
run;
3. Run the analyses
You can now call a SAS procedure one time to compute all regression models:
/* 3. Call PROC REG and use BY statement to compute all regressions */
proc reg data=Long noprint outest=PE;
by VarName;
model Y = Value;
quit;

/* Look at the results */
proc print data=PE(obs=5);
var VarName Intercept Value;
run;
The PE data set contains the parameter estimates for every single-variable regression of Y onto Xi. The table shows the parameter estimates for the first few models. Notice that the models are presented in the order of the BY-group variable, which for this example is the alphabetical order of the name of the explanatory variables.
You can download the complete SAS program that generates example data and runs many regressions. The program computes the regression estimates two ways: by using a macro loop (the SLOW way) and by transforming the data to long form and using BY-group analysis (the BY way).
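To make the "many models, one pass" pattern concrete outside SAS, here is a minimal Python sketch: group the long-form rows by variable name, then compute each single-variable regression's closed-form intercept and slope. The data and names are invented for illustration; this is not the downloadable SAS program, just the same idea in another language.

```python
from collections import defaultdict

def fit_by_group(long_rows):
    """Closed-form simple linear regression (y = b0 + b1*x), one fit per group.

    long_rows: iterable of (group_name, x, y) tuples in long form.
    Returns {group_name: (intercept, slope)}.
    """
    groups = defaultdict(list)
    for name, x, y in long_rows:
        groups[name].append((x, y))

    estimates = {}
    for name, pairs in groups.items():
        n = len(pairs)
        mx = sum(x for x, _ in pairs) / n
        my = sum(y for _, y in pairs) / n
        sxx = sum((x - mx) ** 2 for x, _ in pairs)
        sxy = sum((x - mx) * (y - my) for x, y in pairs)
        slope = sxy / sxx
        estimates[name] = (my - slope * mx, slope)
    return estimates

# Toy long-form data: two explanatory variables, three observations each.
long_rows = [
    ("x1", 0.0, 1.0), ("x1", 1.0, 3.0), ("x1", 2.0, 5.0),
    ("x2", 2.0, 1.0), ("x2", 1.0, 3.0), ("x2", 0.0, 5.0),
]
pe = fit_by_group(long_rows)   # {'x1': (1.0, 2.0), 'x2': (5.0, -2.0)}
```

A single pass over the grouped data replaces thousands of separate procedure calls, which is exactly why the BY way beats the macro-loop way.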
This technique is applicable when the models all have a similar form. In this example, the models were of the form Y=Xi, but a similar approach would work for GLM models such as Y=A|Xi, where A is a fixed classification variable. Of course, you could also use generalized linear models such as logistic regression.
Can you think of other ways to use this trick? Leave a comment.
The post An easy way to run thousands of regressions in SAS appeared first on The DO Loop.
Are you struggling to hire talented data scientists to glean insights from your corporate data? There’s currently a lack of big data talent hampering corporate analytics and causing nightmares for CIO’s, but I have good news for you: You may already have all the data scientists you need! There are […]
Boost your analytics: The rise of the citizen data scientist was published on SAS Voices.
People come from all over the world to attend this highlight of the season. It’s been a tradition for decades. Hotels book months in advance. Traffic is horrendous in the city center. The coveted tickets can cost thousands of dollars, but tens of thousands of people are lucky enough to score them. In […]