
Export APEX workspace and schema in Oracle Cloud

You can create exports of APEX workspaces, applications and schemas from the database using PL/SQL. This can be useful e.g. if you are working on Oracle Cloud Always Free autonomous databases. In this example we use an Always Free autonomous database with the APEX workload type and store the exports in an object storage bucket in Oracle Cloud. You can set up object versioning and lifecycle policy rules, e.g. to move old versions to infrequent access or archive storage and to delete old versions after some period. You can also create a database job to schedule the exports.

First, create a new compartment to isolate the needed cloud resources.

Log in to your Oracle Cloud tenancy, navigate to “Identity & Security” → “Compartments” and click “Create Compartment”.

Compartments page

Enter a name and description, select the root compartment as “Parent Compartment” and click “Create Compartment”.

Create Compartment popup

Once the compartment is created, it will appear in the list.

Compartments page

We need to add a policy for the object storage service in order to use lifecycle policy rules in the bucket.

Navigate to “Identity & Security” → “Policies”, select the root compartment and click “Create Policy”.

Policies page

Enter a name and description, switch “Show manual editor” on and enter the policy statement.

Allow service objectstorage-<oci region identifier> to manage object-family in tenancy

Note! Replace <oci region identifier> in the statement with your region identifier.
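For example, if your tenancy is in the Frankfurt region, whose identifier is eu-frankfurt-1, the statement would read:

Allow service objectstorage-eu-frankfurt-1 to manage object-family in tenancy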

Create policy popup

Once the policy is created, you can see its details.

Policy detail page

Next, create an object storage bucket and set lifecycle policy rules for it.

Navigate to “Storage” → “Buckets”, select the compartment you created earlier and click “Create Bucket”.

Object storage & archive storage page

Enter a bucket name, check “Enable Object Versioning” and click “Create”.

Create bucket popup

Once the bucket is created, it will appear in the list.

Click the bucket name to view its details.

Object storage & archive storage page

Note down the namespace as you will need it later.
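If you have the OCI CLI installed and configured, you can also look up the namespace on the command line:

oci os ns get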

From the left side menu select “Lifecycle Policy Rules” and click “Create Rule”.

Bucket lifecycle policy rules page

Enter a name, select “Previous Versions of Objects”, set “Lifecycle Action” to “Move to Infrequent Access”, set “Number of Days” and click “Create”.

Create lifecycle rule popup

Once the lifecycle policy rule is created, it will appear in the list.

Click “Create Rule” to create another rule.

Bucket lifecycle policy rules page

Enter a name, select “Previous Versions of Objects”, set “Lifecycle Action” to “Delete”, set “Number of Days” and click “Create”.

Create lifecycle rule popup

Once the lifecycle policy rule is created, it will appear in the list.

Bucket lifecycle policy rules page

Next, create a group, policy and user to access the object storage bucket.

Navigate to “Identity & Security” → “Groups” and click “Create Group”.

Groups page

Enter a name and description and click “Create”.

Create group popup

Once the group is created, you can see its details.

Group details page

Navigate to “Identity & Security” → “Policies” and select the compartment you created earlier.

Click “Create Policy”.

Policies page

Enter a name and description, switch “Show manual editor” on and enter the policy statements.

Allow group <group name> to read buckets in compartment <compartment name> where any {target.bucket.name='<bucket name>'}
Allow group <group name> to manage objects in compartment <compartment name> where any {target.bucket.name='<bucket name>'}

Note! Replace <group name>, <compartment name> and <bucket name> in the statements with the names you used for those objects earlier.
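For example, with the hypothetical names backup_group, backup-compartment and backup-bucket, the statements would read:

Allow group backup_group to read buckets in compartment backup-compartment where any {target.bucket.name='backup-bucket'}
Allow group backup_group to manage objects in compartment backup-compartment where any {target.bucket.name='backup-bucket'}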

Create policy popup

Once the policy is created, you can see its details.

Policy details page

Navigate to “Identity & Security” → “Users” and click “Create User”.

Users page

Select “IAM User”, enter a name and description, then click “Create”.

Create user popup

Once the user is created, you can see its details.

Click “Add User to Group”.

User details page

Select the group you created earlier and click “Add”.

Add user to group popup

Once a user is added to a group, the group name will appear in the "Groups" list.

Click “Edit User Capabilities”.

User details page

Limit the user's capabilities to “Auth Token” only and click “Save Changes”.

Edit user capabilities popup

Next, we need to create an authentication token so that the database can move the exports to the object storage bucket.

Navigate to the user's “Auth Tokens” page and click “Generate Token”.

User auth tokens page

Give the token a description and click “Generate Token”.

Generate token popup

Copy the token and store it somewhere safe, as you cannot view it later, then click “Close”.

Generated token popup

Once the token is generated, it will appear in the "Auth Tokens" list.

User auth tokens page

Next, log in to your autonomous database as the admin user, e.g. using Database Actions.

Create a new database user, grant the needed privileges and enable Database Actions.

create user <username> identified by "<password>";

-- Grant privileges
grant connect, resource, dwrole, apex_administrator_role, datapump_cloud_exp, datapump_cloud_imp to <username>;
grant execute on dbms_cloud to <username>;

alter user <username> default role all;

-- Add quota for user
alter user <username> quota unlimited on data;

-- Enable Database Actions
begin
  ords.enable_schema(
    p_enabled              => true
    ,p_schema              => '<username>'
    ,p_url_mapping_type    => 'BASE_PATH'
    ,p_url_mapping_pattern => '<username>'
  );
end;
/

Note! Replace <username> and <password> with the values you want to use.

Create a database package in the new database user's schema containing the procedures needed to create exports. The package has a procedure that exports all APEX workspaces, workspace files, applications and application parsing schemas and moves them to the object storage bucket. The package also contains a procedure to import a schema from the object storage bucket.

You can review the package code in the GitHub Gist before creating it with the commands below.

alter session set current_schema = <username>;

declare
  l_pkg_sql clob;
begin

-- Fetch package code from GitHub Gist.
  l_pkg_sql :=
    apex_web_service.make_rest_request(
      p_url          => 'https://gist.githubusercontent.com/jariolaine/60bfc2c08c0aa01658c78ea5cc918b46/raw/backup_api.sql'
      ,p_http_method => 'GET'
    );

-- Install package.
  dbms_cloud_repo.install_sql(
    content => l_pkg_sql
  );

end;
/

Note! Replace <username> with the database user you created earlier.

Log in to your database using the database user you created earlier. You can view the user's packages to verify that the package created earlier exists.
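For example, you can query user_objects and check that both the package specification and body have a VALID status:

select object_name
  ,object_type
  ,status
from user_objects
where object_type in ( 'PACKAGE', 'PACKAGE BODY' );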

Create a credential to access object storage.

begin

  dbms_cloud.create_credential(
    credential_name => 'OBJECT_STORAGE_CRED'
    ,username       => '<iam user>'
    ,password       => '<auth token>'
  );

end;
/

Note! Replace <iam user> and <auth token> with the IAM user and auth token you created earlier.
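If you want to verify that the credential exists, you can query the user_credentials view:

select credential_name
  ,username
  ,enabled
from user_credentials;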

Now you can test creating exports by running the procedure.

begin

  backup_api.export_apex_workspaces(
    p_credential_name => 'OBJECT_STORAGE_CRED'
    ,p_region         => '<oci region identifier>'
    ,p_namespace      => '<object storage namespace>'
    ,p_bucket         => '<bucket name>'
  );

end;
/

Note! Replace <oci region identifier>, <object storage namespace> and <bucket name> with your region identifier, namespace and bucket name.

View your object storage bucket objects. APEX exports have the prefix APEX/<workspace name> and schema exports the prefix datapump.
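You can also list the bucket objects directly from the database with dbms_cloud.list_objects. The placeholders below are the same region identifier, namespace and bucket name you used when running the export:

select object_name
  ,bytes
  ,last_modified
from dbms_cloud.list_objects(
  credential_name => 'OBJECT_STORAGE_CRED'
  ,location_uri   => 'https://objectstorage.<oci region identifier>.oraclecloud.com/n/<object storage namespace>/b/<bucket name>/o/'
);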

If you have run the procedure multiple times, you can see that the objects have versions.

Bucket objects

We can also create a job to schedule the exports. Log in to the database using the user you created earlier and create the job.

declare
  l_job_name varchar2(256);
begin

-- Set job name
  l_job_name := 'APEX_EXPORT_JOB';

-- Create scheduler job
  dbms_scheduler.create_job(
    job_name             => l_job_name
    ,job_type            => 'STORED_PROCEDURE'
    ,job_action          => 'backup_api.export_apex_workspaces'
    ,number_of_arguments => 4
    ,start_date          => to_timestamp_tz( to_char( sysdate + 1, 'DD.MM.YYYY' ) || ' 01:00:00 +02:00', 'DD.MM.YYYY HH24:MI:SS TZR' )
    ,repeat_interval     => 'FREQ=DAILY;'
    ,enabled             => false
    ,auto_drop           => false
    ,comments            => 'Daily APEX workspace export job'
  );

-- Set job arguments
  dbms_scheduler.set_job_argument_value(
    job_name           => l_job_name
    ,argument_position => 1
    ,argument_value    => 'OBJECT_STORAGE_CRED'
  );

  dbms_scheduler.set_job_argument_value(
    job_name           => l_job_name
    ,argument_position => 2
    ,argument_value    => '<oci region identifier>'
  );

  dbms_scheduler.set_job_argument_value(
    job_name           => l_job_name
    ,argument_position => 3
    ,argument_value    => '<object storage namespace>'
  );

  dbms_scheduler.set_job_argument_value(
    job_name           => l_job_name
    ,argument_position => 4
    ,argument_value    => '<bucket name>'
  );

-- Enable job
  dbms_scheduler.enable( l_job_name );

end;
/

Note! Replace <oci region identifier>, <object storage namespace> and <bucket name> with your region identifier, namespace and bucket name.
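You can check the job state and next run time from user_scheduler_jobs, or run the job immediately with dbms_scheduler.run_job:

select job_name
  ,state
  ,next_run_date
from user_scheduler_jobs
where job_name = 'APEX_EXPORT_JOB';

begin
  dbms_scheduler.run_job( 'APEX_EXPORT_JOB' );
end;
/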

Now we have set up daily exports. Later, when you view the object storage bucket objects, you can see that the lifecycle policy rules move and delete object versions after the number of days you defined. Please note that the latest version of each object is not moved or deleted.

If you wish to import a schema from the object storage bucket, you can run the procedure:

begin

  backup_api.import_schema(
    p_credential_name => 'OBJECT_STORAGE_CRED'
    ,p_region         => '<oci region identifier>'
    ,p_namespace      => '<object storage namespace>'
    ,p_bucket         => '<bucket name>'
    ,p_schema         => '<schema name>'
    ,p_new_schema     => '<new schema name>'
  );

end;
/

Note! Replace <oci region identifier>, <object storage namespace>, <bucket name>, <schema name> and <new schema name> with your region identifier, namespace, bucket name, schema name and new schema name.

If you omit the parameter p_new_schema, the schema will not be remapped to a new name.
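For example, this call restores the schema under its original name:

begin

  backup_api.import_schema(
    p_credential_name => 'OBJECT_STORAGE_CRED'
    ,p_region         => '<oci region identifier>'
    ,p_namespace      => '<object storage namespace>'
    ,p_bucket         => '<bucket name>'
    ,p_schema         => '<schema name>'
  );

end;
/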
