Tutorial #98: Calling web services using PL/SQL in Oracle

Web services are widely used as a means of communication between applications and devices. Oracle allows developers to consume web services using certain built-in packages. Moreover, one can extend a relational database’s storage, indexing, and searching capabilities to include web services. Through this tutorial, we will learn how to call a simple web service using PL/SQL in Oracle. We will use a public testing platform that serves JavaScript Object Notation (JSON).

Pre-requisites: Oracle 11gR2 database with SQL*Plus, Windows 7 (64 bit)

Step 1: Writing the PL/SQL query

Open the SQL*Plus command prompt and log in using your database credentials. Once logged in successfully, execute the following command to enable DBMS_OUTPUT statements.

set serveroutput on

In order to call web services, Oracle provides the UTL_HTTP package. It makes hyper-text transfer protocol (HTTP) callouts from PL/SQL and SQL, and can be used to access data on the Internet or to call Oracle Web Server cartridges. Note that on Oracle 11g you may first need to configure a network access control list (via DBMS_NETWORK_ACL_ADMIN) to allow outbound HTTP calls. We will create a stored procedure that uses the package. Our PL/SQL procedure would be as follows,

create or replace procedure call_webservice as
  t_http_req      utl_http.req;
  t_http_resp     utl_http.resp;
  t_response_text varchar2(2000);
begin
  t_http_req  := utl_http.begin_request('http://date.jsontest.com/', 'GET', 'HTTP/1.1');
  t_http_resp := utl_http.get_response(t_http_req);
  utl_http.read_text(t_http_resp, t_response_text);
  dbms_output.put_line('Response> status_code: "' || t_http_resp.status_code || '"');
  dbms_output.put_line('Response> reason_phrase: "' || t_http_resp.reason_phrase || '"');
  dbms_output.put_line('Response> data: ' || t_response_text);
  utl_http.end_response(t_http_resp);
end;
/


Step 2: Executing the PL/SQL query

Now, execute the above PL/SQL procedure using the below command,

exec call_webservice

Once the procedure has executed successfully, you will see the response status code, reason phrase and JSON data printed in the console.
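The response from date.jsontest.com is a small JSON document containing date, time and epoch fields. As a rough illustration of what parsing that payload looks like on the client side (plain Python rather than PL/SQL, and the sample values below are made up, not a real response):

```python
import json

# Example payload in the shape returned by date.jsontest.com
# (values here are illustrative).
payload = '{"time": "08:26:01 PM", "milliseconds_since_epoch": 1381782361732, "date": "10-16-2013"}'

data = json.loads(payload)
print(data["date"])  # the service's date field
print(data["time"])  # the service's time field
```

In PL/SQL, similar parsing would be done manually or (from 12c onward) with the JSON support built into the database.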


Reference: UTL_HTTP in Oracle

Tutorial #89: Execute batch file using Oracle Job Scheduler

Hey everyone!

The Oracle database includes a built-in scheduler for functions and procedures that can be called from any PL/SQL program. The DBMS_SCHEDULER package, introduced in Oracle 10g, helps create, run and monitor jobs. In addition, one can define a job to execute a batch (.bat) file.

In order to run an external job, you need to enable and start the OracleJobScheduler service on the database server. Through this tutorial, we will learn how to execute a batch file using Oracle Job Scheduler.

Pre-requisites: Oracle 11gR2 Enterprise Edition, Windows 7 (64 bit)

Step 1: Create a batch script

Let’s begin by creating a batch script called example1.bat. On Windows, launch any text editor like Notepad and include the following contents.


echo test > F:\programs\calling_job_from_oracle\output\sample.txt

The above batch file will output the word “test” to a text file in the output directory. The batch file will automatically create the text file upon execution.

Step 2: Create Oracle job

Open SQL*Plus and log in as a user with the necessary privileges (running external jobs requires the CREATE EXTERNAL JOB privilege). Now, let’s create our job definition by executing the below commands in the console.

    BEGIN
        SYS.DBMS_SCHEDULER.CREATE_JOB(
            job_name            => 'example_job',
            job_type            => 'EXECUTABLE',
            job_action          => 'C:\WINDOWS\system32\cmd.exe',
            job_class           => 'DEFAULT_JOB_CLASS',
            comments            => 'Job to call batch script on Windows',
            auto_drop           => FALSE,
            number_of_arguments => 3,
            enabled             => FALSE);
        SYS.DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE( job_name => 'example_job', argument_position => 1, argument_value => '/q');
        SYS.DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE( job_name => 'example_job', argument_position => 2, argument_value => '/c');
        SYS.DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE( job_name => 'example_job', argument_position => 3, argument_value => '"F:\programs\calling_job_from_oracle\example1.bat"');
        SYS.DBMS_SCHEDULER.ENABLE('example_job');
    END;
    /

As seen above, it is important to note that one can’t directly execute a batch script as the job action. Instead, we open an instance of the command prompt and pass arguments that execute the batch script. For instance, /q is the cmd.exe switch that turns echo off, whereas /c carries out the specified command and then terminates. Finally, you can test the job by executing the below PL/SQL statement.

    EXEC DBMS_SCHEDULER.RUN_JOB('example_job');
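For intuition, the scheduler effectively launches cmd.exe with the three job arguments in order. A small Python sketch (reusing the paths from the steps above) shows how the pieces combine into one command line:

```python
import subprocess

# The executable and the three job arguments, in the same order as
# the SET_JOB_ARGUMENT_VALUE calls above.
args = [
    r"C:\WINDOWS\system32\cmd.exe",
    "/q",  # turn echo off
    "/c",  # run the command, then terminate
    r"F:\programs\calling_job_from_oracle\example1.bat",
]

# list2cmdline renders the Windows command line these arguments produce.
cmdline = subprocess.list2cmdline(args)
print(cmdline)
```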

If the job is executed successfully, you should see a sample.txt file present in the output folder. That’s it for this tutorial. Hope it helps! πŸ™‚


References:

1. Oracle Job Scheduler

2. Running a Windows batch file from DBMS_SCHEDULER

Using virtual columns in Oracle

Not too long ago, Oracle 11g introduced a new feature called virtual columns. A virtual column, as the name implies, is a column whose value is not stored on disk but is instead defined by an expression; the value of the column is the result of evaluating that expression.

Pre-requisites: Oracle database 11g Enterprise Edition

The general syntax for defining a virtual column is,

column_name [datatype] [GENERATED ALWAYS] AS (expression) [VIRTUAL]

Characteristics of virtual columns are as follows,

1. Virtual columns cannot be modified directly using DML statements (you cannot INSERT into or UPDATE a virtual column)

2. Virtual columns cannot be created on index-organized, external, clustered or temporary tables.

3. The data present in the virtual column is not stored in the database. Instead it is dynamically evaluated.

4. All columns mentioned as part of the virtual column expression should belong to the same table.

As every coin has two sides, virtual columns too have their limitations. In this post, we will learn about creating and using virtual columns in the Oracle database.

Let’s create a new table called product with a virtual column profit as follows,

create table product (
  prodid        number,
  prodname      varchar2(20),
  cost_price    number(10,2),
  selling_price number(10,2),
  profit        number(10,2) generated always as (selling_price - cost_price)
);

SELECT column_name, data_type, data_length, data_default, virtual_column FROM user_tab_cols  WHERE table_name = 'PRODUCT';


Try inserting a few records in the product table.

insert into product(prodid,prodname,cost_price,selling_price) values (101,'abc',500,700);
insert into product(prodid,prodname,cost_price,selling_price) values (102,'def',600,800);
insert into product(prodid,prodname,cost_price,selling_price) values (103,'ghi',700,900);
insert into product(prodid,prodname,cost_price,selling_price) values (104,'jkl',800,1200);
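The generated-column idea is not unique to Oracle. If you want to experiment with the same behavior without an Oracle instance, SQLite (3.31+) supports an analogous GENERATED ALWAYS AS clause; the sketch below mirrors the product table (SQLite syntax, not Oracle’s):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# profit is computed from the other two columns, never stored by us.
conn.execute("""
    create table product (
        prodid        integer,
        prodname      text,
        cost_price    real,
        selling_price real,
        profit        real generated always as (selling_price - cost_price) virtual
    )
""")
conn.execute(
    "insert into product(prodid, prodname, cost_price, selling_price) "
    "values (101, 'abc', 500, 700)"
)
print(conn.execute("select prodname, profit from product").fetchone())

# Writing to the generated column is rejected, just as in Oracle.
try:
    conn.execute("insert into product(prodid, profit) values (105, 999)")
except sqlite3.OperationalError as e:
    print("error:", e)
```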


Now, if we try inserting into or updating the virtual column directly, Oracle raises an error (ORA-54013: INSERT operation disallowed on virtual columns).



So, using virtual columns, Oracle essentially lets us store expressions directly in the base tables. One can also emulate virtual columns in earlier versions such as Oracle 10g using function-based indexes, as specified over here.

Hope this was helpful. Thanks for reading! πŸ™‚

Using ODI 12c’s new mapping designer!

In my previous post, I went through the installation of Oracle Data Integrator 12c on the Windows platform. In earlier versions of ODI, we used Interfaces to load a single target datastore from one or more source datastores. Starting with 12c, the concept of Mappings has been introduced.

A mapping comprises a logical design and at least one physical design. A mapping can have many targets of any technology and can be arbitrarily complex. One can build reusable mappings and use them in other mappings or in other reusable mappings (somewhat similar to yellow interfaces). Let’s understand how mappings work through a simple example!

Pre-requisites: Oracle Data Integrator 12c, Oracle database 10g XE

Step 1: Create database tables

Log in to any work schema and create two tables in your Oracle database as follows!

create table mappingsrc(empid int primary key, empname varchar(20));
create table mappingtarget(empid int primary key, empname varchar(20));

Add a few dummy records to the mappingsrc table only.

Step 2: Create mapping

Now, open ODI Studio and in any of your existing projects create a new mapping called test_mapping as follows!


Add the source and the target mappings in the Logical Designer.



Once you have completed the mapping, don’t forget to validate it for any errors!


Finally, run the mapping and check the status of the operation under the Operator tab.



That’s it for this post. Stay tuned for more! πŸ™‚

Installing Oracle Data Integrator 12c!

Hi everyone!

Oracle recently announced the latest version of its data integration products by introducing Oracle Data Integrator 12c and Oracle GoldenGate 12c. It is one of the biggest releases of the company this year and provides features that include improved performance, increased productivity and simplified deployment.

Through this post, we will learn how to install Oracle Data Integrator 12c on Windows. The following are some of the new features added to ODI 12c.

  • Declarative Flow-Based User Interface
  • Reusable Mappings
  • Multiple Target Support
  • Step-by-Step Debugger
  • Runtime Performance Enhancements
  • Oracle GoldenGate Integration Improvements
  • Standalone Agent Management with WebLogic Management Framework
  • Integration with OPSS Enterprise Roles
  • XML Improvements
  • Oracle Warehouse Builder Integration
  • Unique Repository IDs

    Pre-requisites: Windows XP (32 bit) Service Pack 3

    Step 1: Download Oracle Data Integrator 12c

    Go to the downloads page for Oracle Data Integrator and download version 12c for all platforms (including 64 bit).

    Step 2: Run installer

    Once you have downloaded the software, you need to run the installer to begin the setup process. Follow the below steps for better understanding!






    This completes the installation. Now open ODI Studio from within the application directory. Further, you can choose to store passwords in Wallet.




    Reference: Oracle blog

    Determine initial load processes in Oracle GoldenGate!

    Hi everyone!

    One of the interesting features of Oracle GoldenGate is the ability to obtain information about any Extract or Replicat process used during an initial load. An initial load involves extracting the records of a table from a source database into a table in the target database. After the data is synchronized, one can configure processes to capture changes, pump the data to the target and finally replicate the data.

    In scenarios where GoldenGate is already configured for data replication, we need to obtain the information about the extract and the replicat process used for initial data load. Through this post, we will learn how to obtain the same using a few GGSCI commands.

    Pre-requisites: Oracle GoldenGate 11g for Windows (64 bit)

    Step 1: Open the GGSCI interface


    Step 2: Create an extract named ext1 as follows!

    EXTRACT ext1
    SOURCEISTABLE
    USERID ggs_owner, PASSWORD ggs_owner
    TABLE ggs_owner.TEST1;

    As seen above, we have used the SOURCEISTABLE parameter, which extracts all column data specified within the TABLE statement. This parameter applies to initial load methods such as loading data from file to Replicat and loading data from file to a database utility. When used, SOURCEISTABLE must be the first parameter in the Extract parameter file.

    Alternatively, one can specify the same as an argument to the ADD EXTRACT command.

    Step 3: Create a replicat named rep6 as follows!

    REPLICAT rep6
    SPECIALRUN
    USERID ggs_owner, PASSWORD ggs_owner
    MAP ggs_owner.*, TARGET ggs_target.*;

    As seen above, we have added the SPECIALRUN parameter in the replicat file. It is used for certain initial loads and ensures that the replicat is started automatically instead of manually using the START REPLICAT command.


    Step 4: Start the manager process


    Step 5: View information about processes

    Now, one can view the information about the processes used for initial load using the following command!

    info all tasks


    Credits: This post wouldn’t have been possible without the support and help of Mr. Pankaj Singh (Oracle DBA)

    Tutorial #60: Load data from Oracle table to XML using OdiSqlUnload

    Hi everyone!

    It’s time for yet another ODI tutorial. Through this post, we will learn how to export data from an Oracle table to an XML file. To implement this task, we will use the built-in OdiSqlUnload tool that ships with Oracle Data Integrator.

    Pre-requisites: Oracle Data Integrator 11g (build version

    Step 1: Create table in Oracle

    Create a new table called Department in your work schema and insert a few dummy values!

    create table Department (DeptId int primary key, DeptName varchar(20));
    insert into Department values(201, 'Computers');
    insert into Department values(202,'Electronics');
    insert into Department values(203,'Mech');

    Step 2: Create XML file

    Make a new blank XML file called sample.xml. Do not add any contents to it, as that will be done by the built-in tool. πŸ™‚

    Step 3: Create Package and use OdiSqlUnload

    Open ODI Studio and connect to your work repository. Create a new package named table_to_xml in any of your existing projects!

    Now, drag and drop the OdiSqlUnload tool under the Files section.


    Add the required details and finally save and run the package. Check the status of the operation under the Operator tab. If the operation is successful you should see the XML file now populated with data from your Oracle table.
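At its core, OdiSqlUnload reads rows over SQL and writes them out as a flat file. As a rough, tool-free analogue of the table-to-XML step (the element names here are made up and do not match OdiSqlUnload's actual output format), the same idea in Python with an in-memory database:

```python
import sqlite3
import xml.etree.ElementTree as ET

# Stand-in for the Oracle Department table from Step 1.
conn = sqlite3.connect(":memory:")
conn.execute("create table Department (DeptId integer primary key, DeptName text)")
conn.executemany(
    "insert into Department values (?, ?)",
    [(201, "Computers"), (202, "Electronics"), (203, "Mech")],
)

# Serialize each row as a ROW element under a TABLE root.
root = ET.Element("TABLE")
for deptid, deptname in conn.execute(
    "select DeptId, DeptName from Department order by DeptId"
):
    row = ET.SubElement(root, "ROW")
    ET.SubElement(row, "DEPTID").text = str(deptid)
    ET.SubElement(row, "DEPTNAME").text = deptname

print(ET.tostring(root, encoding="unicode"))
```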




    <?xml version="1.0" encoding="ISO-8859-1"?>
    <!-- File generated at 16/10/13 8:03 PM -->

    Scheduling tasks in Oracle database

    Hello everyone!

    Today, we will learn how to schedule a particular task in Oracle. Starting from Oracle 10g, the DBMS_SCHEDULER package provides a collection of scheduling functions and procedures that are callable from any PL/SQL program. The package also provides the ability to execute jobs with user-specific privileges/roles.

    Pre-requisites: Oracle database 10g Express Edition

    Let’s create a simple PL/SQL procedure that we are going to schedule!

    Login to either SYS or SYSTEM (or any user that has the right privileges for scheduling) and execute the following commands.

    create or replace procedure greeting
    as
    begin
      dbms_output.put_line('Hello world');
    end;
    /

    In order to schedule the above procedure, we first need to create a job for it using the DBMS_SCHEDULER package.

    BEGIN
      DBMS_SCHEDULER.create_job (
        job_name        => 'TASK1',
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'greeting',
        start_date      => SYSTIMESTAMP,
        repeat_interval => 'freq=minutely; interval=1',
        end_date        => NULL,
        enabled         => TRUE,
        comments        => 'my first PL/SQL task');
    END;
    /
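The repeat_interval value uses the scheduler's calendaring syntax; a minutely frequency produces one run per minute starting from the job's start date (the job above uses SYSTIMESTAMP). As a plain-Python sketch of the resulting schedule (the start time here is illustrative):

```python
from datetime import datetime, timedelta

# An illustrative start time standing in for SYSTIMESTAMP.
start = datetime(2013, 10, 16, 20, 0, 0)

# A minutely frequency: one run every minute from the start date.
runs = [start + timedelta(minutes=i) for i in range(3)]
for r in runs:
    print(r.isoformat())
```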

    Now, run the execute command to start the job!

    execute dbms_scheduler.run_job('task1');

    In order to view the job description including status, name etc we run the following SQL command.

    SQL> select log_date, job_name, status, run_duration
      2  from dba_scheduler_job_run_details where job_name='TASK1';

    You will observe that the job has been executed successfully. In this way, you can schedule PL/SQL functions, triggers etc to run at a specific time as well. Hope this helps! πŸ™‚


    Demystifying agents in Oracle Data Integrator!

    Hello folks!

    A little background

    Agents are basically services written in Java, running on a specific TCP/IP port, that carry out the execution of Interfaces, Packages etc. In earlier versions of ODI, namely ODI 10g, one only had standalone agents at one's disposal. Since ODI 11g, the concept of Java EE agents came into the picture. Overall, both agents are (almost) the same piece of code; the differences between them are what we will learn ahead!

    The tale of two Agents

    A standalone agent runs on any machine with a JVM implementation. It is easier to deploy, and we can create multiple such agents. The question that arises is: how many would you keep on creating? In an environment that demands scalability, standalone agents would eventually fall short. In such situations we use Java EE agents, which are deployed as a WebLogic Server application. Unlike standalone agents, they utilize the clustering and connection pooling features of the WebLogic server. They are a bit more complex to deploy, but promise enterprise-wide scalability. Standalone agents use the built-in ODI load balancing feature and do not support clustering and connection pooling.


    Let’s say you want to run an ODI Scenario that populates hundreds of records from the source table to the target table. You create two or three standalone agents running on different ports (load balanced) that carry out the execution of the scenario. Now, an agent always first connects to the master repository, obtains the connection information for the work repository, connects to the work repository and finally executes the Scenario. Each time the standalone agent connects, it creates a new connection request to the database. As opening and closing database connections is expensive, at some point the connection overhead may slow down the operation. A better option then is a Java EE agent, which is highly scalable, highly available and supports connection pooling.
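The pooling argument can be made concrete with a toy sketch: if every request "opens" a fresh connection, ten scenario runs mean ten expensive opens, whereas a pool serves all ten from a couple of reusable connections. This is illustrative Python, not ODI or WebLogic code:

```python
import itertools
import queue

class ConnectionPool:
    """Hand out pre-opened connections instead of creating new ones."""
    def __init__(self, size, factory):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(factory())

    def acquire(self):
        return self._pool.get()

    def release(self, conn):
        self._pool.put(conn)

# Each call to connect() stands in for an expensive database connection.
ids = itertools.count()
def connect():
    return next(ids)

pool = ConnectionPool(2, connect)
served = []
for _ in range(10):
    conn = pool.acquire()  # reused, no new connection opened
    served.append(conn)
    pool.release(conn)

# Ten requests were served by only two distinct connections.
print(len(served), len(set(served)))
```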


    What can we conclude?

    Finally, it is the user that has to decide which type of Agent is required. It is quite possible for both the agents to exist simultaneously in the same ODI architecture. The main difference is how you install them, in WLS or on top of JVM.

    Source: What’s new in ODI 11g – Part 1

    Understanding execution plan and cost in Oracle

    Hello everyone!

    In Oracle, an execution plan is created internally every time we fire a SQL query on the database. The plan thus created depends upon the type of query optimizer used in your Oracle database. For example, Oracle 10g by default uses the CBO (cost-based optimizer), which generates the plan by using table statistics. On the other hand, there is the RBO (rule-based optimizer), which uses a set of pre-defined rules to create the plan.

    How do I get the execution plan?

    In Oracle, it is possible to generate the execution plan for any SQL query. Using SQL*Plus, one can run the SET AUTOTRACE command and then execute an SQL statement to see the execution plan as follows!

    SQL> set autotrace traceonly explain;
    SQL> select EmpName from Employee where EmpId=103;

    Execution Plan
    Plan hash value: 3075387830

    | Id  | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |              |     1 |     9 |     1   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| EMPLOYEE     |     1 |     9 |     1   (0)| 00:00:01 |
    |*  2 |   INDEX UNIQUE SCAN         | SYS_C0013049 |     1 |       |     0   (0)| 00:00:01 |

    Predicate Information (identified by operation id):
       2 - access("EMPID"=103)

    As seen above, the execution plan is obtained for the query. Another way to obtain the execution plan is by using the EXPLAIN PLAN statement as follows!

    SQL> analyze table Employee compute statistics;

    Table analyzed.

    SQL> explain plan for select EmpName from Employee where EmpId=103;
    SQL> select * from table(dbms_xplan.display);

    Plan hash value: 3075387830

    | Id  | Operation                   | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT            |              |     1 |     9 |     1   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS BY INDEX ROWID| EMPLOYEE     |     1 |     9 |     1   (0)| 00:00:01 |
    |*  2 |   INDEX UNIQUE SCAN         | SYS_C0013049 |     1 |       |     0   (0)| 00:00:01 |

    Predicate Information (identified by operation id):
       2 - access("EMPID"=103)

    14 rows selected.

    How do I determine the execution cost?

    If you observe closely, the plan generated includes a COST column. The COST column cannot, in general, be used to compare execution plans. Many people have the misconception that if a plan has a higher cost than another plan, it is worse. This isn’t true at all. This has been explained in detail over here.

    The COST column is comparable provided you are evaluating two plans for the same SQL at the same time on the same database with the same parameter settings. It is possible to even write your own SQL script to generate execution plans. This post by Jonathan Lewis gives you a head start.
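Generating and reading plans is not an Oracle-only exercise. For a quick, locally runnable taste of the same idea, SQLite exposes EXPLAIN QUERY PLAN (its output format differs from Oracle's DBMS_XPLAN):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table Employee (EmpId integer primary key, EmpName text)")
conn.execute("insert into Employee values (103, 'abc')")

# Each plan row carries a human-readable detail column (its last field).
plan = conn.execute(
    "explain query plan select EmpName from Employee where EmpId = 103"
).fetchall()
for row in plan:
    print(row[-1])  # e.g. a SEARCH using the integer primary key
```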

    SQL> select cost from plan_table;

    Execution Plan
    Plan hash value: 103984305

    | Id  | Operation         | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
    |   0 | SELECT STATEMENT  |             |     3 |    39 |     2   (0)| 00:00:01 |
    |   1 |  TABLE ACCESS FULL| PLAN_TABLE$ |     3 |    39 |     2   (0)| 00:00:01 |

       - dynamic sampling used for this statement (level=2)

    Hope this Oracle database tip helps you in some way! Stay tuned for more! πŸ™‚