PL/SQL
DBMS_PREDICTIVE_ANALYTICS & Explain
There are two PL/SQL packages for performing data mining/predictive analytics in Oracle. The main PL/SQL package is DBMS_DATA_MINING. This package allows you to build data mining models and to apply them to new data. But there is another PL/SQL package.
The DBMS_PREDICTIVE_ANALYTICS package is very different to the DBMS_DATA_MINING package. The DBMS_PREDICTIVE_ANALYTICS package includes routines for predictive analytics, an automated form of data mining. With predictive analytics, you do not need to be aware of model building or scoring. All mining activities are handled internally by the predictive analytics procedure.
Predictive analytics routines prepare the data, build a model, score the model, and return the results of model scoring. Before exiting, they delete the model and supporting objects.
The package comes with the following functions: EXPLAIN, PREDICT and PROFILE. To see their signatures, you can describe the package in SQL*Plus or SQL Developer (DESC DBMS_PREDICTIVE_ANALYTICS).
This blog post will look at the EXPLAIN function.
EXPLAIN creates an attribute importance model. Attribute importance uses the Minimum Description Length algorithm to determine the relative importance of attributes in predicting a target value. EXPLAIN returns a list of attributes ranked in relative order of their impact on the prediction. This information is derived from the model details for the attribute importance model.
Attribute importance models are not scored against new data. They simply return information (model details) about the data you provide.
I've written two previous blog posts on Attribute Importance. The first of these was on how to calculate Attribute Importance using the Oracle Data Miner tool. In the ODM tool it is now called Feature Selection and is part of the Filter Columns node, and the Attribute Importance model is not persisted in the database. The second blog post showed how you can create the Attribute Importance model using the DBMS_DATA_MINING package.
EXPLAIN ranks attributes in order of influence in explaining a target column.
The syntax of the function is
DBMS_PREDICTIVE_ANALYTICS.EXPLAIN (
data_table_name IN VARCHAR2,
explain_column_name IN VARCHAR2,
result_table_name IN VARCHAR2,
data_schema_name IN VARCHAR2 DEFAULT NULL);
where
data_table_name = Name of input table or view
explain_column_name = Name of column to be explained
result_table_name = Name of table where results are saved. It creates a new table in your schema.
data_schema_name = Name of schema where the input table or view resides. Default: the current schema.
So when calling the function you do not have to include the last parameter.
Using the same example that I have given in the previous blog posts (see above for the links to these), the following command can be run to generate the Attribute Importance.
BEGIN
   DBMS_PREDICTIVE_ANALYTICS.EXPLAIN(
      data_table_name     => 'mining_data_build_v',
      explain_column_name => 'affinity_card',
      result_table_name   => 'PA_EXPLAIN');
END;
/
One thing that stands out is that it is a bit slower to run than the DBMS_DATA_MINING method. On my laptop it took approximately two to three times longer to run. But in total it was still less than a minute.
To display the results, query the table that the function created (PA_EXPLAIN in the example above). The results are ranked in a 0 to 1 range; any attribute that had a negative explanatory value is set to zero.
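A minimal query to view the rankings might look like the following. It assumes the column names (ATTRIBUTE_NAME, EXPLANATORY_VALUE, RANK) that the EXPLAIN procedure creates in its result table.

```sql
-- View the ranked attributes produced by EXPLAIN,
-- most influential first.
SELECT attribute_name,
       explanatory_value,
       rank
FROM   pa_explain
ORDER  BY rank;
```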
Part 1–Getting started with Statistics for Oracle Data Science projects
With all analytics or data science projects one of the first steps typically involves the extraction of data from various sources, merging the data and then calculating various statistics.
The extraction and merging of the data is well covered by lots of other people blogging about how to use Oracle Data Integration (ODI), Oracle Warehouse Builder (OWB), among many others.
What I'm going to look at in this series of blog posts is what statistical functions you might use in the Oracle Database and how to use them.
- This, the first blog post in the series, will look at the DBMS_STAT_FUNCS PL/SQL package, what it can be used for, and gives some sample code on how to use it in your data science projects. I also give some sample code that I typically run to gather some additional stats.
- The second blog post will look at some of the other statistical functions that exist in SQL that you will/may use regularly in your data science projects.
- The third blog post will provide a summary of the other statistical functions that exist in the database.
These statistical functions can be grouped into two main types. The first is the descriptive statistics that are available by using the DBMS_STAT_FUNCS PL/SQL package, and then there is the extensive list of other SQL stats functions. It is worth mentioning at this point that all these stats packages and functions come as standard in the database (i.e. they are FREE; you do not have to pay for an add-on option for the database to use them). So once you have your Oracle database installed you can start using them. There is no need to spend money buying another stats package to do stats. All you need to know is some SQL and what the stats functions are.
DBMS_STAT_FUNCS
One of the stats functions that I use a lot is the SUMMARY function. This is part of the DBMS_STAT_FUNCS PL/SQL package. This function calculates a number of common statistics for an attribute in a table. Yes, that's correct, it only gathers statistics for one attribute at a time, so you will have to run it for each of the numeric attributes in the table.
For those people who are familiar with the Oracle Data Miner tool, the Explore Data node produces a lot of the statistics that the SUMMARY function produces. See below for details of how to produce the Histograms.
The SUMMARY function has the following parameters
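As a rough sketch of the interface (parameter names here are indicative of how the procedure is documented, so check the DBMS_STAT_FUNCS reference for your database version):

```sql
DBMS_STAT_FUNCS.SUMMARY (
   p_ownername   IN  VARCHAR2,   -- schema that owns the table/view
   p_tablename   IN  VARCHAR2,   -- table or view to analyse
   p_columnname  IN  VARCHAR2,   -- numeric column to summarise
   p_sigma_value IN  NUMBER DEFAULT 3,  -- sigma range for extreme values
   s             OUT DBMS_STAT_FUNCS.SummaryType);  -- record of results
```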
Although you will probably be running this function on the data in your schema, you still have to give the schema name. The table name is the name of the table where the data exists, the column name is the name of the column that contains the actual data you want to analyse, and the 's' is the record that will be returned by the SUMMARY function that contains all the summary information.
An example of the basic script to run the SUMMARY function is given below. It will use the data that is available in the sample schemas and the views that were set up for the Oracle Data Mining sample schemas. The table (or in this case the view) that we are going to use is MINING_DATA_BUILD_V. What we are going to do is to replicate some of what the Explore Node does in the Oracle Data Miner tool.
set serveroutput on
declare
   s DBMS_STAT_FUNCS.SummaryType;
begin
   DBMS_STAT_FUNCS.SUMMARY('DMUSER', 'MINING_DATA_BUILD_V', 'AGE', 3, s);
   dbms_output.put_line('SUMMARY STATISTICS');
   dbms_output.put_line('Count         : '||s.count);
   dbms_output.put_line('Min           : '||s.min);
   dbms_output.put_line('Max           : '||s.max);
   dbms_output.put_line('Range         : '||s.range);
   dbms_output.put_line('Mean          : '||round(s.mean));
   dbms_output.put_line('Mode Count    : '||s.cmode.count);
   dbms_output.put_line('Mode          : '||s.cmode(1));
   dbms_output.put_line('Variance      : '||round(s.variance));
   dbms_output.put_line('Stddev        : '||round(s.stddev));
   dbms_output.put_line('Quantile 5    : '||s.quantile_5);
   dbms_output.put_line('Quantile 25   : '||s.quantile_25);
   dbms_output.put_line('Median        : '||s.median);
   dbms_output.put_line('Quantile 75   : '||s.quantile_75);
   dbms_output.put_line('Quantile 95   : '||s.quantile_95);
   dbms_output.put_line('Extreme Count : '||s.extreme_values.count);
   dbms_output.put_line('Extremes      : '||s.extreme_values(1));
   dbms_output.put_line('Top 5         : '||s.top_5_values(1)||','||
                                            s.top_5_values(2)||','||
                                            s.top_5_values(3)||','||
                                            s.top_5_values(4)||','||
                                            s.top_5_values(5));
   dbms_output.put_line('Bottom 5      : '||s.bottom_5_values(5)||','||
                                            s.bottom_5_values(4)||','||
                                            s.bottom_5_values(3)||','||
                                            s.bottom_5_values(2)||','||
                                            s.bottom_5_values(1));
end;
/
We can compare this to what is produced by the Explore Node in ODM
We can see that the Explore Node gives us more statistics to help us with understanding the data.
What Statistics does the Explore Node produce
We can see the actual SQL code that the Explore Node runs to get the statistics that are displayed in the Explore Node View Data window. To do this you will need to right-click on the Explore Node and move the mouse down to the Deploy option. The submenu will open; select 'SQL to Clipboard' from the list. Open a text editor and paste the code. You will need to tidy up some of this code to point it at the actual data source you want. You will get the following:
SELECT /*+ inline */ ATTR,
DATA_TYPE,
NULL_PERCENT,
DISTINCT_CNT,
DISTINCT_PERCENT,
MODE_VALUE,
AVG,
MIN,
MAX,
STD,
VAR,
SKEWNESS,
KURTOSIS,
HISTOGRAMS
FROM OUTPUT_1_23;
Where OUTPUT_1_23 is a working table that ODM has created to store intermediate results from some of its processing, in this case for the Explore Node. You will need to change this to the name of the ODM working table in your schema.
This query does not perform any of the statistics gathering. It just presents the results.
Creating our own Statistics gathering script – Part 1
The attribute names in the above SQL query tell us which statistics functions within Oracle were used. We can replicate this statistics gathering task using the following script. There are four parts to this script. The first part gathers most of the common statistics for the attribute. The second and third parts calculate the Skewness and Kurtosis for the attribute. The final (fourth) part combines the first three parts and lists the outputs.
The one statistic that we are not including at this point is the Histogram information. I will cover this in the next (second) blog post on statistics.
The following script has the data source table/view name included (MINING_DATA_BUILD_V) and the attribute we are going to use (AGE). You will need to modify this script to run it for each attribute.
WITH
basic_statistics AS (select (sum(CASE WHEN age IS NULL THEN 1 ELSE 0 END)/COUNT(*))*100 null_percent,
count(*) num_value,
count(distinct age) distinct_count,
(count(distinct age)/count(*))*100 distinct_percent,
avg(age) avg_value,
min(age) min_value,
max(age) max_value,
stddev(age) std_value,
stats_mode(age) mode_value,
variance(age) var_value
from mining_data_build_v),
skewness AS (select avg(SV) S_value
             from (select power((age - avg(age) over ())/stddev(age) over (), 3) SV
                   from mining_data_build_v) ),
kurtosis AS (select avg(KV) K_value
             from (select power((age - avg(age) over ())/stddev(age) over (), 4) KV
                   from mining_data_build_v) )
SELECT null_percent,
num_value,
distinct_percent,
avg_value,
min_value,
max_value,
std_value,
mode_value,
var_value,
S_value,
K_value
from basic_statistics,
skewness,
kurtosis;
Part 2 – Let's do it for all the attributes in a table
In the code above I've shown how you can gather the statistics for one particular attribute of one table. But in an analytics project you will want to gather the statistics on all the attributes.
What we can do is to take the code above and put it into a procedure. This procedure accepts a table name as input, loops through the attributes for this table and calculates the various statistics. The statistics are saved in a table called DATA_STATS (see below).
drop table data_stats;
create table DATA_STATS (
table_name VARCHAR2(30) NOT NULL,
column_name VARCHAR2(30) NOT NULL,
data_type VARCHAR2(106) NOT NULL,
data_length NUMBER,
data_precision NUMBER,
data_scale NUMBER,
num_records NUMBER,
distinct_count NUMBER,
null_percent NUMBER,
distinct_percent NUMBER,
avg_value NUMBER,
min_value NUMBER,
max_value NUMBER,
std_value NUMBER,
mode_value VARCHAR2(1000),
var_value NUMBER,
s_value NUMBER,
k_value NUMBER,
PRIMARY KEY (table_name, column_name));
This is one of the first things that I do when I start on a new project. I create the DATA_STATS table and run my procedure GATHER_DATA_STATS for each table that we will be using. Doing this gives me a permanent record of the stats for each attribute and saves me from having to rerun various stats at different points of the project. I can also use these stats to produce some additional stats or some graphs.
Here is the code for the GATHER_DATA_STATS procedure.
CREATE OR REPLACE PROCEDURE gather_data_stats(p_table_name IN varchar2) AS
   cursor c_attributes (c_table_name varchar2)
   is SELECT table_name,
             column_name,
             data_type,
             data_length,
             data_precision,
             data_scale
      FROM user_tab_columns
      WHERE table_name = upper(c_table_name);

   v_sql  NUMBER;
   v_rows NUMBER;
BEGIN
   dbms_output.put_line('Starting to gather statistics for '||upper(p_table_name)||' at '||to_char(sysdate,'DD-MON-YY HH24:MI:SS'));

   FOR r_att in c_attributes(p_table_name) LOOP
      --
      -- remove any previously generated stats for this attribute
      --
      v_sql := DBMS_SQL.OPEN_CURSOR;
      DBMS_SQL.PARSE(v_sql, 'delete from DATA_STATS where table_name = '''||r_att.table_name||''' and column_name = '''||r_att.column_name||'''', DBMS_SQL.NATIVE);
      v_rows := DBMS_SQL.EXECUTE(v_sql);
      DBMS_SQL.CLOSE_CURSOR(v_sql);

      IF r_att.data_type = 'NUMBER' THEN
         dbms_output.put_line(r_att.table_name||' : '||r_att.column_name||' : '||r_att.data_type);
         --
         -- set up the insert statement and execute it
         --
         v_sql := DBMS_SQL.OPEN_CURSOR;
         DBMS_SQL.PARSE(v_sql, 'insert into data_stats select '''||r_att.table_name||''', '''||r_att.column_name||''', '''||r_att.data_type||''', '||r_att.data_length||', '||nvl(r_att.data_precision,0)||', '||nvl(r_att.data_scale,0)||', count(*) num_records, count(distinct '||r_att.column_name||') distinct_count, (sum(CASE WHEN '||r_att.column_name||' IS NULL THEN 1 ELSE 0 END)/COUNT(*))*100 null_percent, (count(distinct '||r_att.column_name||')/count(*))*100 distinct_percent, avg('||r_att.column_name||') avg_value, min('||r_att.column_name||') min_value, max('||r_att.column_name||') max_value, stddev('||r_att.column_name||') std_value, stats_mode('||r_att.column_name||') mode_value, variance('||r_att.column_name||') var_value, null, null from '||r_att.table_name, DBMS_SQL.NATIVE);
         v_rows := DBMS_SQL.EXECUTE(v_sql);
         DBMS_SQL.CLOSE_CURSOR(v_sql);
      ELSIF r_att.data_type IN ('CHAR', 'VARCHAR', 'VARCHAR2') THEN
         dbms_output.put_line(r_att.table_name||' : '||r_att.column_name||' : '||r_att.data_type);
         --
         -- We need to gather a smaller number of stats for the character attributes
         --
         v_sql := DBMS_SQL.OPEN_CURSOR;
         begin
            DBMS_SQL.PARSE(v_sql, 'insert into data_stats select '''||r_att.table_name||''', '''||r_att.column_name||''', '''||r_att.data_type||''', '||r_att.data_length||', '||nvl(r_att.data_precision,0)||', '||nvl(r_att.data_scale,0)||', count(*) num_records, count(distinct '||r_att.column_name||') distinct_count, (sum(CASE WHEN '||r_att.column_name||' IS NULL THEN 1 ELSE 0 END)/COUNT(*))*100 null_percent, (count(distinct '||r_att.column_name||')/count(*))*100 distinct_percent, null, null, null, null, stats_mode('||r_att.column_name||') mode_value, null, null, null from '||r_att.table_name, DBMS_SQL.NATIVE);
            v_rows := DBMS_SQL.EXECUTE(v_sql);
            DBMS_SQL.CLOSE_CURSOR(v_sql);
         exception
            when others then
               dbms_output.put_line('Error gathering stats for '||r_att.column_name||' : '||SQLERRM);
         end;
      ELSE
         dbms_output.put_line('Unable to gather statistics for '||r_att.column_name||' with data type of '||r_att.data_type||'.');
      END IF;
   END LOOP;

   dbms_output.put_line('Finished gathering statistics for '||upper(p_table_name)||' at '||to_char(sysdate,'DD-MON-YY HH24:MI:SS'));
   commit;
END;
/
Then to run it for a table:
exec gather_data_stats('mining_data_build_v');
We can view the contents of the DATA_STATS table by executing the following in SQL*Plus or SQL Developer
select * from DATA_STATS;
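Once populated, the DATA_STATS table is handy for quick data-quality checks. For example, a query along the following lines (the thresholds are arbitrary illustrations, not recommendations) will flag sparse or near-constant attributes:

```sql
-- Flag columns that are mostly null, or that hold only a
-- single distinct value, using the stats gathered above.
SELECT table_name,
       column_name,
       null_percent,
       distinct_count
FROM   data_stats
WHERE  null_percent > 50
   OR  distinct_count <= 1
ORDER  BY null_percent DESC;
```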
OUG Norway April 2013 – New Year’s News
I received an email at 23:24 on the 1st of January from the OUG in Norway telling me that I've had two presentations accepted for the annual OUG Norway seminar event. This will be on during the 17th-19th of April.
The first day of this event (17th April) will be held in a hotel in Oslo. Then on the morning of 18th April we board the Color Magic cruise for the next two days of the conference. The ferry/cruise will go from Oslo to Kiel in Germany and then back again to Oslo, returning around 10am on Saturday 20th April.
I will be giving two presentations on the Oracle Advanced Analytics Option. The first presentation, 'Using Predictive Analytics in Oracle', will give an overview of the Oracle Advanced Analytics Option and will then focus on the Oracle Data Miner work-flow tool. This presentation will include a live demo of using Oracle Data Miner to create some data mining models.
The second presentation, 'How to Deploy and Use your Oracle Data Miner Models in Production', builds on the examples given in the first presentation and will show how you can migrate, use and update your Oracle Data Miner models using the features available in SQL and PL/SQL. Again a demo will be given.
Update on : Adding numbers between
Over the past few days I've had a number of emails and comments based on my previous post, 'Adding numbers between two values'. In that post I included some PL/SQL code that can be used to add up the numbers between two values. I mentioned that this was a question that my son (a few years pre-teen) had asked me.
There are two main alternative solutions to the same problem. One involves just using a SELECT and the other involves using recursion. I will come back to these alternative solutions below.
But let me start off with a bit more detail and background on why I approached the problem the way that I did. The main reason is that my son is a pre-teen. Over the past couple of years he has expressed an interest in what his daddy does. We even have matching ORACLENERD t-shirts.
When I was working through the problem with my son I wanted to show him how to take a problem and by breaking it down into its different parts we can work out an overall solution. We can then take each of these parts and translate them into code. In this case some PL/SQL, yes it is a bit nerdy and we do have the t-shirt. The code that I gave illustrates many different parts of the language and hopefully he will use some of these features as we continue on our learning experience.
It is good sometimes to break a problem down into smaller parts. That way we can understand it better and see what works and what does not; if something does not work then we know which bit, and it also leads to easier maintenance. At a later point, as you develop an in-depth knowledge of certain features of a language, you can rewrite what you have to be more efficient.
All part of the learning experience.
Ok lets take a look at the other ways to answer this problem. The first approach is to just use a single SELECT statement.
SELECT sum(rownum + &&Start_Number - 1)
FROM dual
CONNECT BY level <= &End_Number - &&Start_Number + 1;
An even simpler way is
SELECT sum(level)
FROM dual
CONNECT BY level between &Start_Number and &End_Number;
These queries create a hierarchical query that produces all the numbers between the Start_Number parameter and the End_Number parameter. The SUM is needed to add all the numbers/rows produced. This is nice and simple (but not that easy for my son at this point).
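As an aside, the same answer can be had without generating any rows at all, using the closed formula for the sum of an arithmetic series (sum of 1..n = n*(n+1)/2). A sketch using the same substitution variables:

```sql
-- Sum of integers from &&Start_Number to &End_Number as
-- sum(1..End) - sum(1..Start-1); for 0 and 100 this gives 5050.
SELECT (&End_Number * (&End_Number + 1) / 2)
     - ((&&Start_Number - 1) * &&Start_Number / 2) AS total
FROM   dual;
```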
Thank you to everyone who contacted me about this. I really appreciated your feedback and please keep your comments coming for all my posts.
Adding numbers between two values
My son asked me the other day, what is the total number if you add all the numbers between Zero and 100.
We could have sat down to work it out with some paper and a pen, but instead I decided to introduce him to the world of Oracle, SQL and PL/SQL
The first step we took was to work out how you would do it on paper for some of the numbers. Then we translated this into some PL/SQL code. OK, I did a lot of this, but he did seem to understand and follow what I was doing.
So the following Function is what we ended up with to add all the numbers between two numbers and return the answer.
CREATE or REPLACE function AddNumsBetween
( pStartNum IN NUMBER,
pEndNum IN NUMBER)
RETURN NUMBER
IS
vSum Number := 0;
BEGIN
FOR i IN pStartNum .. pEndNum LOOP
vSum := vSum + i;
END LOOP;
return vSum;
END;
/
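Because AddNumsBetween is a stored function with no side effects, it can also be called directly from a SQL query, which makes for a handy quick check:

```sql
-- Call the function straight from SQL; 0 to 100 gives 5050.
SELECT AddNumsBetween(0, 100) AS total
FROM   dual;
```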
The next step was to write some code to call this function. The code prompts the user to enter the Start number and End number.
set serveroutput on
DECLARE
vStartNum NUMBER := 0;
vEndNum NUMBER := 100;
vAnswer NUMBER := 0;
BEGIN
vStartNum := &Start_Number;
vEndNum := &End_Number;
vAnswer := AddNumsBetween(vStartNum, vEndNum);
dbms_output.put_line('The sum of numbers between '||vStartNum||' and '||vEndNum||' is '||vAnswer||'.');
END;
/
To answer my son's original question, we used Zero and 100 as our inputs.
The answer to the question is 5,050.
ASCII to character conversion in Oracle
Here is some code that will produce formatted output of the characters and their ASCII values. The formatting is broken into lower case letters, upper case letters, characters with an ASCII value less than that of 'a', and characters whose ASCII values are greater than that of 'z'.
Code
set serveroutput on FORMAT WRAPPED
DECLARE
   vTab    VARCHAR2(5)  := CHR(9);
   vNum    NUMBER       := 0;
   vString VARCHAR2(80) := '';
BEGIN
   --
   -- Formatted lower case letters to ASCII values
   --
   dbms_output.put_line('Formatted Lower Case Letters to ASCII values');
   dbms_output.put_line('-------------------------------------------------------');
   FOR i IN ASCII('a') .. ASCII('z') LOOP
      IF vNum < 6 THEN
         vString := vString||CHR(i)||' : '||i||vTab;
         vNum := vNum + 1;
      ELSIF vNum = 6 THEN
         dbms_output.put_line(vString||CHR(i)||' : '||i);
         vNum := 0;
         vString := '';
      ELSE
         dbms_output.put_line('ERROR');
      END IF;
   END LOOP;
   dbms_output.put_line(vString);
   --
   -- Formatted upper case letters to ASCII values
   --
   vString := '';
   vNum := 0;
   dbms_output.new_line;
   dbms_output.new_line;
   dbms_output.put_line('Formatted Upper Case Letters to ASCII values');
   dbms_output.put_line('-------------------------------------------------------');
   FOR i IN ASCII('A') .. ASCII('Z') LOOP
      IF vNum < 6 THEN
         vString := vString||CHR(i)||' : '||i||vTab;
         vNum := vNum + 1;
      ELSIF vNum = 6 THEN
         dbms_output.put_line(vString||CHR(i)||' : '||i);
         vNum := 0;
         vString := '';
      ELSE
         dbms_output.put_line('ERROR');
      END IF;
   END LOOP;
   dbms_output.put_line(vString);
   --
   -- Formatted chars less than 'a' to ASCII values
   --
   vString := '';
   vNum := 0;
   dbms_output.new_line;
   dbms_output.new_line;
   dbms_output.put_line('Formatted Letters, less than a to ASCII values');
   dbms_output.put_line('-------------------------------------------------------');
   FOR i IN 0 .. ASCII('a')-1 LOOP
      IF vNum < 6 THEN
         vString := vString||CHR(i)||' : '||i||vTab;
         vNum := vNum + 1;
      ELSIF vNum = 6 THEN
         dbms_output.put_line(vString||CHR(i)||' : '||i);
         vNum := 0;
         vString := '';
      ELSE
         dbms_output.put_line('ERROR');
      END IF;
   END LOOP;
   dbms_output.put_line(vString);
   --
   -- Formatted chars greater than 'z' to ASCII values
   --
   vNum := 0;
   vString := '';
   dbms_output.new_line;
   dbms_output.new_line;
   dbms_output.put_line('Formatted Letters, greater than z to ASCII values');
   dbms_output.put_line('-------------------------------------------------------');
   FOR i IN ASCII('z') .. ASCII('z')+133 LOOP
      IF vNum < 6 THEN
         vString := vString||CHR(i)||' : '||i||vTab;
         vNum := vNum + 1;
      ELSIF vNum = 6 THEN
         dbms_output.put_line(vString||CHR(i)||' : '||i);
         vNum := 0;
         vString := '';
      ELSE
         dbms_output.put_line('ERROR');
      END IF;
   END LOOP;
   dbms_output.put_line(vString);
END;
/
Output
Formatted Lower Case Letters to ASCII values
-------------------------------------------------------
a : 97 b : 98 c : 99 d : 100 e : 101 f : 102 g : 103
h : 104 i : 105 j : 106 k : 107 l : 108 m : 109 n : 110
o : 111 p : 112 q : 113 r : 114 s : 115 t : 116 u : 117
v : 118 w : 119 x : 120 y : 121 z : 122
Formatted Upper Case Letters to ASCII values
-------------------------------------------------------
A : 65 B : 66 C : 67 D : 68 E : 69 F : 70 G : 71
H : 72 I : 73 J : 74 K : 75 L : 76 M : 77 N : 78
O : 79 P : 80 Q : 81 R : 82 S : 83 T : 84 U : 85
V : 86 W : 87 X : 88 Y : 89 Z : 90
Formatted Letters, less than a to ASCII values
-------------------------------------------------------
: 0 ☺ : 1 ☻ : 2 ♥ : 3 ♦ : 4 ♣ : 5 ♠ : 6
: 7 : 8 : 9
: 13 ♂ : 11 ♀ : 12
♫ : 14 ☼ : 15 ► : 16 ◄ : 17 ↕ : 18 ‼ : 19 ¶ : 20
§ : 21 ▬ : 22 ↨ : 23 ↑ : 24 ↓ : 25 → : 26 ← : 27
∟ : 28 ↔ : 29 ▲ : 30 ▼ : 31 : 32 ! : 33 ” : 34
# : 35 $ : 36 % : 37 & : 38 ‘ : 39 ( : 40 ) : 41
* : 42 + : 43 , : 44 – : 45 . : 46 / : 47 0 : 48
1 : 49 2 : 50 3 : 51 4 : 52 5 : 53 6 : 54 7 : 55
8 : 56 9 : 57 : : 58 ; : 59 < : 60 = : 61 > : 62
? : 63 @ : 64 A : 65 B : 66 C : 67 D : 68 E : 69
F : 70 G : 71 H : 72 I : 73 J : 74 K : 75 L : 76
M : 77 N : 78 O : 79 P : 80 Q : 81 R : 82 S : 83
T : 84 U : 85 V : 86 W : 87 X : 88 Y : 89 Z : 90
[ : 91 \ : 92 ] : 93 ^ : 94 _ : 95 ` : 96
Formatted Letters, greater than z to ASCII values
-------------------------------------------------------
z : 122 { : 123 | : 124 } : 125 ~ : 126 ⌂ : 127 Ç : 128
ü : 129 é : 130 â : 131 ä : 132 à : 133 å : 134 ç : 135
ê : 136 ë : 137 è : 138 ï : 139 î : 140 ì : 141 Ä : 142
Å : 143 É : 144 æ : 145 Æ : 146 ô : 147 ö : 148 ò : 149
û : 150 ù : 151 ÿ : 152 Ö : 153 Ü : 154 ø : 155 £ : 156
Ø : 157 × : 158 ƒ : 159 á : 160 í : 161 ó : 162 ú : 163
ñ : 164 Ñ : 165 ª : 166 º : 167 ¿ : 168 ® : 169 ¬ : 170
½ : 171 ¼ : 172 ¡ : 173 « : 174 » : 175 ░ : 176 ▒ : 177
▓ : 178 │ : 179 ┤ : 180 Á : 181 Â : 182 À : 183 © : 184
╣ : 185 ║ : 186 ╗ : 187 ╝ : 188 ¢ : 189 ¥ : 190 ┐ : 191
└ : 192 ┴ : 193 ┬ : 194 ├ : 195 ─ : 196 ┼ : 197 ã : 198
à : 199 ╚ : 200 ╔ : 201 ╩ : 202 ╦ : 203 ╠ : 204 ═ : 205
╬ : 206 ¤ : 207 ð : 208 Ð : 209 Ê : 210 Ë : 211 È : 212
ı : 213 Í : 214 Î : 215 Ï : 216 ┘ : 217 ┌ : 218 █ : 219
▄ : 220 ¦ : 221 Ì : 222 ▀ : 223 Ó : 224 ß : 225 Ô : 226
Ò : 227 õ : 228 Õ : 229 µ : 230 þ : 231 Þ : 232 Ú : 233
Û : 234 Ù : 235 ý : 236 Ý : 237 ¯ : 238 ´ : 239 : 240
± : 241 ‗ : 242 ¾ : 243 ¶ : 244 § : 245 ÷ : 246 ¸ : 247
° : 248 ¨ : 249 · : 250 ¹ : 251 ³ : 252 ² : 253 ■ : 254
: 255
PL/SQL procedure successfully completed.
Observations
There are two things that stand out in this. The first is that a sound is produced. This is because one of the characters is defined this way: it is ASCII number 7, the bell character. This can be repeated using the following:
select chr(7) from dual;
The second is the formatting of the lines for ascii codes 8 to 13. We can see that one of the ascii codes does not get displayed and the ordering of these is not as expected. This is due to ascii 10 being a line feed (with ascii 13 being a carriage return).
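The line-feed behaviour is easy to demonstrate on its own; concatenating CHR(10) into a string splits the displayed value across two lines:

```sql
-- CHR(10) is a line feed, so SQL*Plus displays this value
-- over two lines rather than one.
SELECT 'first line'||CHR(10)||'second line' AS val
FROM   dual;
```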