Oracle Database

Number of rows in each Table – Various Databases


A common task developers perform is finding out how many records exist in every table in a schema. The examples below work on the current schema of the developer, but they can easily be expanded to include tables in other schemas or all schemas across a database.

These examples cover the different ways of determining this information across the main databases, including Oracle, MySQL, Postgres, SQL Server and Snowflake.

A little warning before using these queries: they may not give the true, accurate number of records in each table. Most of them extract the number of records from the data dictionary of the database, which depends on background processes gathering this information. These background processes run from time to time, anywhere from a few minutes to many tens of minutes apart. So the results are a good indication of the number of records in each table, rather than an exact count.

Oracle

SELECT table_name, num_rows
FROM user_tables
ORDER BY num_rows DESC;

or

SELECT table_name,
       to_number(
          extractvalue(xmltype(
             dbms_xmlgen.getxml('select count(*) c from '||table_name))
                ,'/ROWSET/ROW/C')) tab_count
FROM   user_tables
ORDER BY tab_count desc;

Using PL/SQL we can do something like the following.

SET SERVEROUTPUT ON

DECLARE
   val NUMBER;
BEGIN
   FOR i IN (SELECT table_name FROM user_tables ORDER BY table_name desc) LOOP
      EXECUTE IMMEDIATE 'SELECT count(*) FROM '|| i.table_name INTO val;
      DBMS_OUTPUT.PUT_LINE(i.table_name||' -> '|| val );
   END LOOP;
END;
/
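
Remember, num_rows in the first query is only as fresh as the optimizer statistics. If you want to make sure the statistics are reasonably current before querying, you can gather them yourself; a minimal sketch for the current schema:

exec DBMS_STATS.GATHER_SCHEMA_STATS(ownname => USER);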

MySQL

SELECT table_name, table_rows
FROM   INFORMATION_SCHEMA.TABLES
WHERE  table_type = 'BASE TABLE'
AND    table_schema = DATABASE();   -- DATABASE() is the current schema; current_user() returns 'user@host'

An alternative uses a Common Table Expression (CTE), i.e. the WITH clause, to build a statement that counts the actual rows in each table. The generated statement is stored in @sql and then executed:

WITH table_list AS (
  SELECT table_name
  FROM   information_schema.tables
  WHERE  table_schema = DATABASE()
  AND    table_type = 'BASE TABLE'
)
SELECT CONCAT(
            GROUP_CONCAT(CONCAT("SELECT '",table_name,"' table_name,COUNT(*) num_rows FROM ",table_name) SEPARATOR " UNION "),
            ' ORDER BY table_name'
        )
INTO @sql
FROM table_list;

-- the generated statement still has to be run
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;

Postgres

select relname as table_name, n_live_tup as num_rows from pg_stat_user_tables;

An alternative is

select n.nspname as table_schema,
       c.relname as table_name,
       c.reltuples as rows
from pg_class c
join pg_namespace n on n.oid = c.relnamespace
where c.relkind = 'r'
      and n.nspname = current_user
order by c.reltuples desc;
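
Both Postgres queries above report estimates maintained by the statistics collector, so n_live_tup and reltuples can lag behind reality. If you need exact counts, a small PL/pgSQL block, similar in spirit to the Oracle PL/SQL example above, can count each table. A sketch, assuming your tables live in a schema named after the current user:

DO $$
DECLARE
   t   record;
   cnt bigint;
BEGIN
   FOR t IN SELECT schemaname, tablename FROM pg_tables WHERE schemaname = current_user LOOP
      -- format() with %I quotes the identifiers safely
      EXECUTE format('SELECT count(*) FROM %I.%I', t.schemaname, t.tablename) INTO cnt;
      RAISE NOTICE '% -> %', t.tablename, cnt;
   END LOOP;
END $$;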

SQL Server

SELECT tab.name,
       SUM(par.rows) AS num_rows
FROM   sys.tables tab INNER JOIN sys.partitions par
          ON tab.object_id = par.object_id
WHERE  schema_name(tab.schema_id) = current_user
AND    par.index_id IN (0, 1)   -- heap or clustered index only, so rows are not double counted
GROUP  BY tab.name
ORDER  BY num_rows DESC;

Snowflake

SELECT t.table_schema, t.table_name, t.row_count
FROM   information_schema.tables t
WHERE  t.table_type = 'BASE TABLE'
AND    t.table_schema = current_schema()   -- current_schema(); current_user is the login name, not a schema
ORDER BY t.row_count desc;
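
A quick alternative in Snowflake is the SHOW TABLES command, whose output includes a rows column with the same metadata-based count:

show tables;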

The examples given above are some of the ways to obtain this information. As with most things, there can be multiple ways of doing it, and most people will have their preferred one, based on their background and preferences.

As you can see from the code given above, they are all very similar, with similar syntax, etc. The only real difference is the name of the data dictionary table/view containing the information. That said, knowing which data dictionary views to query can be a little challenging as you switch between databases.


AUTO_PARTITION – Inspecting & Implementing Recommendations


In a previous blog post I gave an overview of the DBMS_AUTO_PARTITION package in Oracle Autonomous Database. That post looked at how to get started, how to set up Auto Partitioning, and how to allow it to automatically implement partitioning.

Automatic implementation might not be something the DBAs want to happen, for lots of different reasons. An alternative is to use DBMS_AUTO_PARTITION to make recommendations for tables where partitioning would give a performance improvement. The DBA can then inspect these recommendations and decide which of them to implement.

In the previous post we set AUTO_PARTITION_MODE to 'IMPLEMENT' using the CONFIGURE procedure. We need to change that so recommendations are only reported.

exec dbms_auto_partition.configure('AUTO_PARTITION_MODE','REPORT ONLY');

Just remember, tables will only be considered by AUTO_PARTITION if they meet the candidate conditions outlined in my previous post.

Next we can ask for recommendations using the RECOMMEND_PARTITION_METHOD function. Since this is a function (it returns the recommendation ID), it needs to be called in an assignment rather than with a plain exec:

DECLARE
   rec_id RAW(16);
BEGIN
   rec_id := dbms_auto_partition.recommend_partition_method(
      table_owner    => 'WHISKEY',
      table_name     => 'DIRECTIONS',
      report_type    => 'TEXT',
      report_section => 'ALL',
      report_level   => 'ALL');
END;
/

The results from this are stored in DBA_AUTO_PARTITION_RECOMMENDATIONS, which you can query to view the recommendations.

select recommendation_id, partition_method, partition_key
from   dba_auto_partition_recommendations;

RECOMMENDATION_ID                PARTITION_METHOD                                                                                    PARTITION_KEY
-------------------------------- --------------------------------------------------------------------------------------------------- -------------
D28FC3CF09DF1E1DE053D010000ABEA6 Method: LIST(SYS_OP_INTERVAL_HIGH_BOUND("D", INTERVAL '2' MONTH, TIMESTAMP '2019-08-10 00:00:00')) AUTOMATIC   D

To apply a recommendation, pass the RECOMMENDATION_ID value to the APPLY_RECOMMENDATION procedure.

exec dbms_auto_partition.apply_recommendation('D28FC3CF09DF1E1DE053D010000ABEA6');

It might take some minutes for the partitioned table to become available. During this time the original table remains available, as the change is implemented using an ALTER TABLE MODIFY PARTITION ONLINE command.

Two other functions are REPORT_ACTIVITY and REPORT_LAST_ACTIVITY. These can be used to export a detailed report on the recommendations in text or HTML form. It is probably a good idea to create and download these for your change records.

spool autoPartitionFinding.html
select dbms_auto_partition.report_last_activity(type=>'HTML') from dual;
exit;

AUTO_PARTITION – Basic setup


Partitioning is an effective way to improve the performance of SQL queries on large volumes of data in a database table. But only if a bit of care and attention is taken by both the DBA and the Developer (or someone with both of these roles). Care is needed on the database side to ensure the correct partitioning method is deployed and to manage the partitions, as some partitioning methods can create a significantly large number of partitions, which in turn can affect the management of these and possibly performance too, which is not what you want. Care is also needed on the developer side to ensure their code is written in a way that utilises the partitioning method deployed. If it isn't, then you may not see much improvement in the performance of your queries, and sometimes things can run slower. Which no one wants!

With the Oracle Autonomous Database we have the expectation that it will 'manage' a lot of the performance features behind the scenes without the need for the DBA and Developer getting involved ('Autonomous'). This is true up to a point. Sometimes a little human input is needed to give a guiding hand to the Autonomous engine, to help it focus on the data that needs particular attention.

In this blog post we will have a look at DBMS_AUTO_PARTITION and how you can do a basic setup, configuration and enablement. I'll have another post that looks at the recommendation feature of DBMS_AUTO_PARTITION. Just a quick reminder: DBMS_AUTO_PARTITION is for the Oracle Autonomous Database (ADB) (on the Cloud). You'll need to run the following as the ADMIN user.

The first step is to enable auto partitioning on the ADB using the CONFIGURE procedure. The AUTO_PARTITION_MODE parameter can take three values:

  • IMPLEMENT : generates a report and implements the recommended partitioning method (Autonomous!)
  • REPORT ONLY : {default} reports recommendations for partitioning on tables
  • OFF : turns off auto partitioning (no reporting and no implementing)

For example, to enable auto partitioning and automatically implement the recommended partitioning method:

exec DBMS_AUTO_PARTITION.CONFIGURE('AUTO_PARTITION_MODE', 'IMPLEMENT');

The changes can be inspected in the DBA_AUTO_PARTITION_CONFIG view.

SELECT * FROM DBA_AUTO_PARTITION_CONFIG;

The output from this SELECT shows that AUTO_PARTITION_MODE is set to IMPLEMENT.

The next step with DBMS_AUTO_PARTITION is to tell the ADB what schemas and/or tables to include for auto partitioning. This first example shows how to turn on auto partitioning for a particular schema, and to allow the auto partitioning engine to determine what is needed and to go and implement what it thinks are the best partitioning methods.

exec DBMS_AUTO_PARTITION.CONFIGURE(
   parameter_name => 'AUTO_PARTITION_SCHEMA', 
   parameter_value => 'WHISKEY', 
   ALLOW => TRUE);

If you query the DBA_AUTO_PARTITION_CONFIG view again, the new AUTO_PARTITION_SCHEMA entry is now listed.

We have now enabled a schema (called WHISKEY) to be included as part of the auto partitioning engine.
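
If you would rather scope this to individual tables than to a whole schema, there is also an AUTO_PARTITION_TABLE configuration parameter (it appears again in the candidate conditions below). A sketch, assuming it accepts an owner-qualified table name:

exec DBMS_AUTO_PARTITION.CONFIGURE(
   parameter_name => 'AUTO_PARTITION_TABLE',
   parameter_value => 'WHISKEY.DIRECTIONS',
   allow => TRUE);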

Auto Partitioning may not do anything for a little while, with some reports suggesting you wait 15 minutes for the database to pick up any changes and make suggestions. There are also some conditions a table needs to meet before it can be considered; this is referred to as being a 'Candidate'. These conditions include:

  • Table passes inclusion and exclusion tests specified by AUTO_PARTITION_SCHEMA and AUTO_PARTITION_TABLE configuration parameters.
  • Table exists and has up-to-date statistics.
  • Table is at least 64 GB.
  • Table has 5 or more queries in the SQL tuning set that scanned the table.
  • Table does not contain a LONG data type column.
  • Table is not manually partitioned.
  • Table is not an external table, an internal/external hybrid table, a temporary table, an index-organized table, or a clustered table.
  • Table does not have a domain index or bitmap join index.
  • Table is not an advanced queuing, materialized view, or flashback archive storage table.
  • Table does not have nested tables, or certain other object features.

If you find Auto Partitioning isn’t partitioning your tables (i.e. they are not valid Candidates), it could be because the tables aren’t meeting the above list of conditions.

This can be verified using the VALIDATE_CANDIDATE_TABLE function.

select DBMS_AUTO_PARTITION.VALIDATE_CANDIDATE_TABLE( 
   table_owner => 'WHISKEY',
   table_name => 'DIRECTIONS')
from dual;

If the table has met the above list of conditions, the query will return ‘VALID’; otherwise one or more of the conditions have not been met and the query will return ‘INVALID:’ followed by one or more reasons.

Check out my other blog post on using AUTO_PARTITION to explore its recommendations and how to implement them.

Running Oracle Database on Docker on Apple M1 Chip


This post is for you if you have an Apple M1 laptop and cannot get Oracle Database to run on Docker.

The reason Oracle Database, and lots of other software, doesn’t run on the new Apple Silicon is that the new chip uses a different instruction set to the one used by Intel chips. Most of the database vendors have said they will not be porting their databases to the M1 chip, as most/all servers out there run on x86 chips, and the cost of porting is just not worth it when there are effectively zero customers for it.

Are you using an x86 Chip computer (Windows or Macs with intel chips)? If so, follow these instructions (and ignore this post)

If you have been using an Apple laptop for some time and have recently upgraded, you are now using the M1 chip, and you have probably found some of your software no longer runs. In my scenario (and for many other people) you can no longer run an Oracle Database 😦

But there does seem to be a possible solution, highlighted by Tom de Vroomen on his blog. A workaround is to spin up an x86 container using Colima. Tom gives some instructions on his blog, and what I list below is an extended set of instructions to get fully set up and running with Oracle on Docker on the M1 chip.

1-Install Homebrew

You might have Homebrew installed, but if not run the following to install.

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

2-Install colima

You can now install Colima using Homebrew. This might take a minute or two to run.

brew install colima

3-Start colima x86 container

With Colima installed, we can now start an x86 container.

colima start --arch x86_64 --memory 4

The container will be based on x86, which is the important part of what we need. The memory is 4GB, but you can probably drop that a little.

The above command should start within a second or two.

4-Install Oracle Database for Docker

The following command will create an Oracle Database docker container using the image created by Gerald Venzl.

docker run -d -p 1521:1521 -e ORACLE_PASSWORD=<your password> -v oracle-volume:/opt/oracle/oradata gvenzl/oracle-xe

23c Database – If you want to use the 23c Database, check out this post for the command to install it.

I changed <your password> to SysPassword1.

This will create the docker container and will allow any changes to the database to be persisted after you shut down docker. This is what you want to happen.
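
The database can take a few minutes to create on the first run. One way to keep an eye on progress is to follow the container logs; a sketch, where the container id is whatever docker ps reports for your container:

docker ps
docker logs -f <container id>
# the gvenzl images print a line like "DATABASE IS READY TO USE!" when startup is complete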

5-Log-in to Oracle as System

Open the docker client to see if the Oracle Database image is running. If not, click on the Run button.

When it finishes starting up, open the command line (see the icon to the left of the Run button), and log in as the SYSTEM user.

sqlplus system/SysPassword1@//localhost/XEPDB1

You are now running Oracle Database on Docker on an M1 chip laptop 🙂

6-Create new user

You shouldn’t use the SYSTEM user for day-to-day work, as that is like using root for everything. You’ll need to create a new user/schema in the database to use for your work. Run the following.

create user brendan identified by BTPassword1 default tablespace users quota unlimited on users;
grant connect, resource to brendan;

If these run without any errors, you now have your own schema in the Oracle Database on Docker (on an M1 chip).

7-Connect using SQL*Plus & SQL Developer

Now let’s connect to the schema using sqlplus.

sqlplus brendan/BTPassword1@//localhost/XEPDB1

That should work for you and you can now proceed using the command line tool.

If you prefer to use a GUI tool then go install SQL Developer. Jeff Smith has a blog post about installing SQL Developer on the M1 chip. Here is the connection screen with all the connection details entered (using the username and password given above).

You can now use the command line as well as SQL Developer to connect to your Oracle Database (on docker on M1).

8-Stop Docker and Colima

After you have finished using the Oracle Database on Docker you will want to shut it down until the next time you want to use it. There are two steps to follow. The first is to stop the Docker image: just go to the Docker Desktop and click on the Stop button. It might take a few seconds for it to shut down.

The second thing you need to do is to stop Colima.

colima stop

That’s it, all done.

9-What you need to run the next time (and every time after that)

For the second and subsequent times you want to use the Oracle Docker image, all you need to do is the following.

(a) Start Colima

colima start --arch x86_64 --memory 4

(b) Start Oracle on Docker

Open Docker Desktop and click on the Run button [see Docker Desktop image above]

And to stop everything

(a) Stop the Oracle Database on Docker Desktop

(b) Stop Colima by running ‘colima stop’ in a terminal

Oracle Database In-Memory – simple example


In a previous post, I showed how to enable and increase the memory allocation for use by Oracle In-Memory. That example was based on using the Pre-built VM supplied by Oracle.

To use In-Memory on your objects, you have a few options.

Enabling the In-Memory attribute on the EXAMPLE tablespace by specifying the INMEMORY attribute

SQL> ALTER TABLESPACE example INMEMORY;

Enabling the In-Memory attribute on the sales table but excluding the “prod_id” column

SQL> ALTER TABLE sales INMEMORY NO INMEMORY(prod_id);

Disabling the In-Memory attribute on one partition of the sales table by specifying the NO INMEMORY clause

SQL> ALTER TABLE sales MODIFY PARTITION SALES_Q1_1998 NO INMEMORY;

Enabling the In-Memory attribute on the customers table with a priority level of critical

SQL> ALTER TABLE customers INMEMORY PRIORITY CRITICAL;

You can also specify the priority level, which helps to prioritise the order in which the objects are loaded into memory.

A simple example to illustrate the effect of using In-Memory versus not.

Create a table with, say, 11K records. It doesn’t really matter what the columns and data are.
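
For example, something like the following sketch will do; the table name test_inmemory matches the queries below.

create table test_inmemory as
select rownum as id,
       'ROW_' || rownum as name,
       sysdate - mod(rownum, 365) as created_date
from   dual
connect by level <= 11000;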

Now select all the records and display the explain plan

select count(*) from test_inmemory;
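
If you are in SQL*Plus or SQL Developer, one way to display the plan is:

EXPLAIN PLAN FOR
select count(*) from test_inmemory;

SELECT * FROM table(DBMS_XPLAN.DISPLAY);

Before the table is populated into the In-Memory column store, the plan should show TABLE ACCESS FULL; once it is in memory, it should show TABLE ACCESS INMEMORY FULL.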

Now, move the table to In-Memory and rerun your query.

alter table test_inmemory inmemory PRIORITY critical;  
select count(*) from test_inmemory;  -- again

There you go!

We can check to see which objects are In-Memory with:

SELECT table_name, inmemory, inmemory_priority, inmemory_distribute,
       inmemory_compression, inmemory_duplicate
FROM user_tables
WHERE inmemory = 'ENABLED'
ORDER BY table_name;

To remove the object from In-Memory

SQL> alter table test_inmemory no inmemory;  -- remove the table from in-memory

This is just a simple test, and lots of other things can be done to improve performance.

But you do need to be careful about using In-Memory. It has some limitations and scenarios where it doesn’t work so well, so care is needed.

Changing In-Memory size in Oracle Database


The pre-built virtual machine provided by Oracle for trying out and playing with Oracle Database comes configured to use the In-Memory option. But memory size is a little limited if you are trying to load anything slightly bigger than a tiny table into memory, for example if the table has more than a few hundred rows.

The amount of memory allocated to In-Memory can be increased to allow more data to be loaded. There is a requirement that the VM and the Database have enough memory allocated to allow this. If they don’t and you increase the In-Memory size too much, you will have problems restarting the database and the VM. So proceed carefully.

For the pre-built VM, I typically allocate 4G or 8G of RAM to the VM. This in turn will give more memory to the database when it starts.

To set up In-Memory on the VM, open a terminal window and run this command:

sqlplus sys/oracle as sysdba

Then run these two commands

alter session set container = cdb$root;
alter system set inmemory_size = 200M scope=spfile;

Now bounce the VM, i.e. restart it.

In-memory will now be enabled on your Database, and you can now create/move tables in and out of in-memory.
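
A quick way to verify the new setting after the restart; a sketch:

show parameter inmemory_size

select pool, alloc_bytes, used_bytes, populate_status from v$inmemory_area;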

Oracle 18c XE Docker setup


During August (2021) Gerald Venzl of Oracle released a new set of Docker images, and these included the Oracle 18c XE Database. Check out Gerald’s blog post about this for a lot more details on these images. Great work Gerald, and it’s way simpler to set this up compared to previous releases.

The following is really just a reminder to myself of the commands needed to install and run one of the 18c XE docker images.

Gerald has provided 3 different versions of 18c XE Database. Check out his blog post for more details of what is included/excluded in each image.

I decided to go with the FULL docker image (oracle-xe-full), just because I use most of the DB features and like to play around with the rest. If you just want a basic Database then go with the medium or slim sized docker images.

  • oracle-xe-full : Contains the full Oracle 18c XE Database installation, with all the bells and whistles. This is the largest docker image.
  • oracle-xe : This medium sized image has some things stripped out of the installation. It contains most of the functionality of the full image, but some of the edge-case functionality has been removed.
  • oracle-xe-slim : This is the smallest image and has a lot of the extra features removed. Probably only suitable if you want a basic Database.

Before you run the following commands you will need to install Docker.

Step 1: Download the 18c XE image

docker pull gvenzl/oracle-xe

Step 2: Check the image exists in your Docker env

docker images

Step 3: Run the image

docker run -d -p 1521:1521 -e ORACLE_PASSWORD=SysPassword1 -v oracle-volume:/opt/oracle/oradata gvenzl/oracle-xe

This command remaps port 1521 to local port 1521, sets the password, and gives volume details so that any changes to the database and image are persisted, i.e. when you restart the image your previous work will still be there.

Step 4: Rename image [you can skip this step if you want. I just wanted a different name]

docker ps
docker rename d95a3db95747 18XE 

NB: Use the code/reference for your docker image. It will be different to mine (d95a3db95747)

Step 5: Connect to the Database as DBA/Admin schema

You can use SQL*Plus or some other client side tool to connect to the database

sqlplus system/SysPassword1@//localhost/XEPDB1

A simple query to check we are connected to the database.

select username from dba_users;

Step 6: Create your own (developer) Schema

create user demo identified by demo quota unlimited on users;
grant connect, resource to demo;

Exit SQL*Plus and log back into the Database using the DEMO schema you just created.

connect demo/demo@//localhost/XEPDB1

Step 7: Create a Table and enter some Records

create table test (col1 NUMBER, col2 VARCHAR2(10));
insert into test values (1, 'Brendan');
commit;

Step 8: Test the Docker image persists the data

Stop the docker image

docker stop 18XE

Check it is no longer running

docker ps

Nothing will be displayed

Step 9: Start the 18XE Docker image and Check data was persisted

docker start 18XE
docker ps

You should see the docker image is running

sqlplus demo/demo@//localhost/XEPDB1
select table_name from user_tables;
select * from test;

These last two commands should show the table and the record in the table. This means the data was persisted.

All done, you now have a working Docker image of Oracle 18XE running.

Just remember to stop the image when you don’t need it on your computer. This will save you some resource usage.

GoLang: Inserting records into Oracle Database using goracle


In this blog post I’ll give some examples of how to process data for inserting into a table in an Oracle Database. I’ve had previous blog posts on how to set up and connect to an Oracle Database, and another on retrieving data from an Oracle Database and the importance of setting the Array Fetch Size.

When manipulating data the statements can be grouped (generally) into creating new data and updating existing data.

When working with this kind of processing we need to avoid creating the statements as a concatenation of strings. This opens the possibility of SQL injection, plus we are not allowing the optimizer in the database to do its thing. Prepared statements allow for the reuse of execution plans, and this in turn can speed up our data processing and applications.

In a previous blog post I gave a simple example of a prepared statement for querying data and then using it to pass in different values as a parameter to this statement.

dbQuery, err := db.Prepare("select cust_first_name, cust_last_name, cust_city from sh.customers where cust_gender = :1")  
if err != nil {
    fmt.Println(err) 
    return  
}  
defer dbQuery.Close() 

rows, err := dbQuery.Query("M")  // use a string, not a rune ('M'), for the bind value
if err != nil { 
    fmt.Println(".....Error processing query") 
    fmt.Println(err) 
    return  
}  
defer rows.Close()  

var CustFname, CustSname, CustCity string 
for rows.Next() {  
    rows.Scan(&CustFname, &CustSname, &CustCity) 
    fmt.Println(CustFname, CustSname, CustCity)  
}

For prepared statements for inserting data we can follow a similar structure. In the following example a table called LAST_CONTACT is used. This table has columns:

  • CUST_ID
  • CON_METHOD
  • CON_MESSAGE

_, err := db.Exec("insert into LAST_CONTACT(cust_id, con_method, con_message) VALUES(:1, :2, :3)", 1, "Phone", "First contact with customer")
if err != nil {
    fmt.Println(".....Error Inserting data") 
    fmt.Println(err) 
    return
}

An alternative is the following, which allows us to get some additional information about what was done and the result of it. In this example we get the number of records processed.

stmt, err := db.Prepare("insert into LAST_CONTACT(cust_id, con_method, con_message) VALUES(:1, :2, :3)")
if err != nil {
    fmt.Println(err)
    return
}
defer stmt.Close()

// use the prepared statement (stmt), and Exec rather than Query, for an INSERT
res, err := stmt.Exec(1, "Phone", "First contact with customer")
if err != nil {
    fmt.Println(".....Error Inserting data")
    fmt.Println(err)
    return
}

// RowsAffected returns the count and an error
rowCnt, err := res.RowsAffected()
if err != nil {
    fmt.Println(err)
    return
}
fmt.Println(rowCnt, " rows inserted.")

A similar approach can be taken for updating and deleting records; a sketch of these is shown below.
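
Here is a hedged sketch of the UPDATE and DELETE equivalents, reusing the db connection from the examples above; the column values are just for illustration.

// Update a record and report how many rows were affected
res, err := db.Exec("update LAST_CONTACT set con_message = :1 where cust_id = :2", "Follow-up call", 1)
if err != nil {
    fmt.Println(".....Error Updating data")
    fmt.Println(err)
    return
}
rowCnt, _ := res.RowsAffected()
fmt.Println(rowCnt, " rows updated.")

// Delete the records for a customer
res, err = db.Exec("delete from LAST_CONTACT where cust_id = :1", 1)
if err != nil {
    fmt.Println(".....Error Deleting data")
    fmt.Println(err)
    return
}
rowCnt, _ = res.RowsAffected()
fmt.Println(rowCnt, " rows deleted.")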

Managing Transactions

With transactions, a number of statements need to be processed as a single unit. For example, in double entry bookkeeping we have two inserts: one credit insert and one debit insert. To do this we can define the start of a transaction using db.Begin() and the end of the transaction with Commit(). Here is an example where we insert two contact details.

// start the transaction
transx, err := db.Begin()
if err != nil {
    fmt.Println(err)
    return
}

// Insert first record, using the transaction (transx), not db directly
_, err = transx.Exec("insert into LAST_CONTACT(cust_id, con_method, con_message) VALUES(:1, :2, :3)", 1, "Email", "First Email with customer")
if err != nil {
    fmt.Println(".....Error Inserting data - first statement")
    fmt.Println(err)
    transx.Rollback()
    return
}

// Insert second record
_, err = transx.Exec("insert into LAST_CONTACT(cust_id, con_method, con_message) VALUES(:1, :2, :3)", 1, "In-Person", "First In-Person with customer")
if err != nil {
    fmt.Println(".....Error Inserting data - second statement")
    fmt.Println(err)
    transx.Rollback()
    return
}

// complete the transaction
err = transx.Commit()
if err != nil {
    fmt.Println(".....Error Committing Transaction")
    fmt.Println(err)
    return
}


Importance of setting Fetched Rows size for Database Query using Golang


When issuing queries to the database, one of the challenges every developer faces is how to get the results quickly. If your queries return only a small number of records, e.g. fewer than 5, then you don’t really have to worry about execution time. That is, unless your query is performing some complex processing, joining lots of tables, etc.

Most of the time developers are working with one or a small number of records, using a simple query. Everything runs quickly.

But what if your query is returning several tens or thousands of records. Assuming we have a simple query and no query optimization is needed, the challenge facing the developer is how can you get all of those records quickly into your environment and process them. Typically the database gets blamed for the query result set being returned slowly. But what if this wasn’t the case? In most cases developers take the default parameter settings of the functions and libraries. For database connection libraries and their functions, you can change some of the parameters and affect how your code, your query, gets executed on the Database server and can affect how quickly the data is shipped from the database to your code.

One very important parameter to consider is the query array size. This is the number of records the database will send to your code in each batch. The database will keep sending batches until you tell it to stop. It makes sense to have the size of this batch set to a small value, as most queries return one or a small number of records. But when we get onto returning a larger number of records it can affect the response time significantly.

I tested the effect of changing the size of the returning buffer/array using Golang, querying data in an Oracle Database hosted on Oracle Cloud, and using the goracle library to connect to the database.

[ I did a similar test using Python. The results can be found here. You will notice that Golang is significantly quicker than Python, as you would expect. ]

The database table being queried contains 55,000 records, and I just executed a SELECT * FROM … on this table. The results shown below contain the time the query took to process this data for different buffer/array sizes, set via the FetchRowCount value.

rows, err := db.Query(dbQuery, goracle.FetchRowCount(arraySize))

[Screenshot: query timings for the different FetchRowCount settings]

As you can see, as the size of the buffer/array increases, the time it takes to process the data drops. This is because the buffer/array returns a larger number of records each time, which results in a reduced number of round trips to/from the database, i.e. fewer packets of records are sent across the network.

The challenge for the developer is to work out the optimal number to set for the buffer/array size. The default for the goracle library, using the Oracle client, is 256 rows/records.

When the above query is run without the FetchRowCount setting, it will use this default value of 256. When this is used we get the following timings.

[Screenshot: query timings using the default array size of 256]

We can see that, for the data set being used in this test case, the optimal setting needs to be around 1,500.

What if we set the parameter to be very large? That would not necessarily make it quicker. You can see from the first table that the timing starts to increase for the last two settings. There is an overhead in gathering and sending the data.

Here is a subset of the Golang code I used to perform the tests.

var currentTime = time.Now()
var i int
var custId int

// the array/fetch sizes to test
arrayOne := [11]int{5, 10, 30, 50, 100, 200, 500, 1000, 1500, 2000, 2500}

// time layout note: "03:04:05.000 PM" gives hh:mm:ss.mmm; the ':06' element in the
// original layout would print the two-digit year, as 06 means 'year' in Go time layouts
for index, arraySize := range arrayOne {

    currentTime = time.Now()
    fmt.Println(index, " Array Size = ", arraySize, " : ", currentTime.Format("03:04:05.000 PM"))

    db, err := sql.Open("goracle", username+"/"+password+"@"+host+"/"+database)
    if err != nil {
        fmt.Println("... DB Setup Failed")
        fmt.Println(err)
        return
    }
    // NB: defer fires when the enclosing function returns, not per loop iteration
    defer db.Close()

    if err = db.Ping(); err != nil {
        fmt.Printf("Error connecting to the database: %s\n", err)
        return
    }

    currentTime = time.Now()
    fmt.Println("...Executing Query", currentTime.Format("03:04:05.000 PM"))

    dbQuery := "select cust_id from sh.customers"
    rows, err := db.Query(dbQuery, goracle.FetchRowCount(arraySize))
    if err != nil {
        fmt.Println(".....Error processing query")
        fmt.Println(err)
        return
    }
    defer rows.Close()

    i = 0
    currentTime = time.Now()
    fmt.Println("... Parsing query results", currentTime.Format("03:04:05.000 PM"))

    for rows.Next() {
        rows.Scan(&custId)
        i++
        if i%10000 == 0 {
            currentTime = time.Now()
            fmt.Println("...... ", i, " customers processed", currentTime.Format("03:04:05.000 PM"))
        }
    }

    currentTime = time.Now()
    fmt.Println(i, " customers processed", currentTime.Format("03:04:05.000 PM"))

    fmt.Println("... Closing connection")
    finishTime := time.Now()
    fmt.Println("Finished at ", finishTime.Format("03:04:05.000 PM"))
}


Connecting Go Lang to Oracle Database


It seems like more and more people are using Go. With that comes the need to access a database or databases. This blog will show you how to get connected to an Oracle Database and how to perform some basic operations using Go.

The first thing you need is to have Go installed. There are a couple of options for you. The first is to download it from the Go Lang website, or if you are an Oracle purist, they provide some repositories and these can be installed using yum.

Next you need to install Oracle Instant Client.

[Screenshot: Oracle Instant Client download page]

Unzip the contents of the downloaded file. Copy the extracted directory (and its contents) to your preferred location and then add the path to this directory to the search PATH. Depending on your configuration and setup, you may need to configure some environment variables. Some people report having to create a ‘.pc’ file and having to change the symlinks for libraries. I didn’t have to do any of this.

The final preparation steps, after installing Go and Oracle Instant Client, is to download the ‘goracle’ package. This package provides a GO database/sql driver for connecting to Oracle Database using ODPI-C. To install the ‘goracle’ package run:

go get gopkg.in/goracle.v2

This takes a few seconds to run. There are no display or progress updates while this command is running.

NB: the goracle package has since been renamed to godror. Everything works the same; it’s just the package that has been renamed.
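
If you are starting out with the renamed package, the install command changes accordingly; a sketch, assuming the godror module path on GitHub:

go get github.com/godror/godror

and in your code the driver name changes too:

db, err := sql.Open("godror", username+"/"+password+"@"+host+"/"+database)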

See below for the full Go code for connecting to Oracle Database, executing a query and gathering some database information. To run this code, saved in a file named ‘ora_db.go’:

go run ora_db.go

I’ll now break this code down into steps.

Import the packages that will be used in your application. In this example I’m importing four packages. ‘fmt’ is the formatting package and allows us to write to standard output. The ‘time’ package allows us to capture and print some dates, times and how long things take. The ‘database/sql’ package allows SQL execution, and it is needed for the final package, ‘goracle’.

import (
    "fmt"
    "time"
    "database/sql"
    goracle "gopkg.in/goracle.v2"
)

Next we can define the values needed for connecting to the Oracle Database. These include the username, password, the host string and the database name.

    username := "odm_user";
    password := "odm_user";
    host := ".....";
    database := "....";

Now set up the database connection. This doesn’t actually create a connection; that is deferred until you run the first command against the database.

db, err := sql.Open("goracle", username+"/"+password+"@"+host+"/"+database)
if err != nil {
    fmt.Println("... DB Setup Failed") 
    fmt.Println(err)
    return
}
defer db.Close()

If an error is detected, the error message will be printed and the application will exit (return). The ‘defer db.Close()’ command sets up the closing of the connection, but defers it until the end of the application. This allows you to keep related code together and avoids having to remember to add the close command at the end of your code.

Now force the connection to open using a Ping command:

if err = db.Ping(); err != nil {
    fmt.Printf("Error connecting to the database: %s\n", err)
    return
}

Our database connection is now open!

The ‘goracle’ package allows us to get some metadata about the connection, such as details of the client and server configuration. Here we just gather the details of the version of the database we are connected to. The Oracle Database I’m using is 18c Extreme Edition, hosted on Oracle Cloud.

var serverVersion goracle.VersionInfo 
serverVersion, err = goracle.ServerVersion(db);
if err != nil {
    fmt.Printf("Error getting Database Information: %s\n", err)
    return
}
fmt.Println("DB Version : ",serverVersion)

First we define the variable used to store the server details. This is defined with the data type specified in the ‘goracle’ package. Then we gather the server version details, check for an error, and finally print out the details.

To execute a query, we define the query (dbQuery) and then use the connection (db) to run it (db.Query). The variable ‘rows’ points to the result set from the query. Then we defer the closing of the result set pointer. We need to keep this result set open, as we will parse it in the next code segment.

dbQuery := "select table_name from user_tables where table_name not like 'DM$%' and table_name not like 'ODMR$%'"
rows, err := db.Query(dbQuery)
if err != nil {
   fmt.Println(".....Error processing query")
   fmt.Println(err)
   return
}
defer rows.Close()

To parse the result set, we can use a FOR loop. Before the loop we define a variable to contain the value returned from the result set (tableName). The FOR loop extracts each row returned and assigns the value to tableName. This variable is then printed.

var tableName string
for rows.Next() {
   rows.Scan(&tableName)
   fmt.Println(tableName)
}

That’s it.

We have connected to Oracle Database, retrieved the version of the database, executed a query and processed the result set.

Here is the full code and the output from running it.

package main

import (
    "fmt"
    "time"
    "database/sql"
    goracle "gopkg.in/goracle.v2"
)

func main(){
    username := "odm_user";
    password := "odm_user";
    host := ".....";
    database := "....";

    currentTime := time.Now()
    fmt.Println("Starting at : ", currentTime.Format("03:04:05.000 PM")) // .000 = milliseconds; ':06' would print the year

    fmt.Println("... Setting up Database Connection") 
    db, err := sql.Open("goracle", username+"/"+password+"@"+host+"/"+database)
    if err != nil {
        fmt.Println("... DB Setup Failed") 
        fmt.Println(err)
        return
    }
    defer db.Close()

    fmt.Println("... Opening Database Connection") 
    if err = db.Ping(); err != nil {
        fmt.Printf("Error connecting to the database: %s\n", err)
        return
    }
    fmt.Println("... Connected to Database")

    var serverVersion goracle.VersionInfo 
    serverVersion, err = goracle.ServerVersion(db);
    if err != nil {
        fmt.Printf("Error getting Database Information: %s\n", err)
        return
    }
    fmt.Println("DB Version : ",serverVersion)

    dbQuery := "select table_name from user_tables where table_name not like 'DM$%' and table_name not like 'ODMR$%'"
    rows, err := db.Query(dbQuery)
    if err != nil {
        fmt.Println(".....Error processing query")
        fmt.Println(err)
        return
    }
    defer rows.Close()

    fmt.Println("... Parsing query results") 
    var tableName string
    for rows.Next() {
        rows.Scan(&tableName)
        fmt.Println(tableName)
    }

    fmt.Println("... Closing connection") 
    finishTime := time.Now()
    fmt.Println("Finished at ", finishTime.Format("03:04:05.000 PM"))
}

[Screenshot: output from running ora_db.go]

Installing and configuring Oracle 18c XE


The following are the simple steps required to install Oracle 18c XE (express edition) on Oracle Linux. Check out my previous blog post on Oracle 18c XE. Also check out the product webpage for more details and updates. There is a very important word on that webpage. That word is ‘FREE’, and it is something you don’t see too often. Go get and use the (almost) full enterprise version of the Oracle Database.

I’ve created a VM using Oracle Linux for the OS.

After setting up the VM, login as root and download the RPM file.


Run the following as root to perform dependency checks and configurations.

yum install -y oracle-database-preinstall-18c

You can now run the install using the following command.

yum -y localinstall oracle-database-xe-18c-1.0-1.x86_64.rpm

When the install has completed, the next step is to create and configure the database. This is done using the following command.

/etc/init.d/oracle-xe-18c configure

You will be prompted to enter a common password for the SYS, SYSTEM and PDBADMIN users. You will need to change these at a later time.

Then to start the database, run

systemctl start oracle-xe-18c

The next time you restart the VM, you might find that the database hasn’t started or loaded, and you will need to do this manually. This is a bit of a pain in the behind.

To avoid having to do this each time, run the following commands as root.

systemctl daemon-reload
systemctl enable oracle-xe-18c

These commands will allow the database to be shutdown when the machine or VM is being shutdown and will automatically start up the database when the machine/VM startups again.

The final step is to connect to the database

sqlplus sys/<your password>@//localhost:1521/XE as sysdba

You can then go and perform all your typical admin tasks, set up SQLDeveloper, and create additional users.

Bingo! All is good now.

Docker

Putting Oracle 18c XE on docker is an excellent way to make it easily deployable and to build out solutions that require a DB.

Check out these links for instructions on how to set up a Docker container with Oracle 18c XE.

https://github.com/fuzziebrain/docker-oracle-xe