Manual Database Creation in Oracle 10g on Linux, AIX, Sun Solaris, and UNIX



Simple manual database creation in 10g


In Oracle a database can be created either manually or through the GUI (DBCA), but in most cases DBAs prefer to create the database manually. I have seen some DBAs generate the scripts through DBCA and then execute them manually; that is also a good approach, because at least they learn what Oracle does internally before the database is created and what kind of scripts it executes. This is very important knowledge for anyone working as a DBA.



 Step 1:

Create the environment file or .bash_profile

]$ vi test.env   or   .bash_profile

export ORACLE_SID=test
export ORACLE_HOME=/oraeng/app/oracle/product/10.2.0
export ORACLE_BASE=/oraeng/app/oracle/product
export PATH=$ORACLE_HOME/bin:$PATH:.

:wq!

 Step 2:

Change or create the parameter file in $ORACLE_HOME/dbs


$ cd  $ORACLE_HOME/dbs
dbs $   cp init.ora  inittest.ora
dbs $  vi  inittest.ora

db_name = test
instance_name = test
shared_pool_size = 64m
control_files = /disk1/oradata/test/control.ctl
core_dump_dest = /disk1/oradata/test/cdump
user_dump_dest = /disk1/oradata/test/udump
background_dump_dest = /disk1/oradata/test/bdump

undo_tablespace = undotbs
undo_retention = 7200        # default is 900 (15 minutes)
undo_management = auto

compatible = 10.2.0

:wq!



 Step 3:

Create necessary directories

$ mkdir -p /disk1/oradata/test
$ cd /disk1/oradata/test
test $ mkdir bdump cdump udump
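
The directories created here must match what the pfile from Step 2 names. A minimal shell sketch (the helper name and the pfile format are assumptions based on the example above) that pulls each *_dump_dest value out of the pfile and reports whether the directory exists yet:

```shell
# Hypothetical helper: list each *_dump_dest value from the pfile and
# report whether the directory has been created.
check_dump_dirs() {
  pfile="$1"
  awk -F= '/dump_dest/ { gsub(/[[:space:]]/, "", $2); print $2 }' "$pfile" |
  while read -r dir; do
    if [ -d "$dir" ]; then
      echo "ok: $dir"
    else
      echo "missing: $dir"
    fi
  done
}
# usage: check_dump_dirs $ORACLE_HOME/dbs/inittest.ora
```

Any "missing" line here would show up later as an instance startup error, so it is cheaper to catch it now.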


 Step 4:

Create Database script

]$ vi create.sql
 
create database test
datafile '/disk1/oradata/test/system.dbf' size 170m autoextend on
sysaux
datafile '/disk1/oradata/test/sysaux.dbf' size 100m
default tablespace userdata
datafile '/disk1/oradata/test/userdata.dbf' size 50m
default temporary tablespace temp
tempfile '/disk1/oradata/test/temp.dbf' size 50m
undo tablespace undotbs
datafile '/disk1/oradata/test/undo.dbf' size 50m
logfile
group 1 ('/disk1/oradata/test/redo1a.log') size 4m,
group 2 ('/disk1/oradata/test/redo1b.log') size 4m;

:wq!

Step 5:

Create a run.sql script that builds the data dictionary views, the PL/SQL procedures and packages, and the product user profile tables for the default users.

]$ vi run.sql

@$ORACLE_HOME/rdbms/admin/catalog.sql
@$ORACLE_HOME/rdbms/admin/catproc.sql
connect system/manager
@$ORACLE_HOME/sqlplus/admin/pupbld.sql

:wq!


$ sqlplus / as sysdba

>  startup nomount

>  @create.sql

>  @run.sql


Verification views: V$DATABASE, V$INSTANCE

Schema Refresh Using Data Pump in Oracle Database


 Schema Refresh Using Datapump

Schema refresh is one of the routine tasks in a DBA's life. To move objects from one schema to another, or from one database to another, we normally use expdp and impdp (or the traditional export and import utilities), but here I am going to explain the Data Pump method, because Data Pump is faster than traditional export and import. The post below walks through a complete schema refresh.

Oracle Data Pump was introduced in Oracle 10g; it enables very high-speed movement of data and metadata from one database to another. Before performing the schema refresh, check whether the exporting user has the EXP_FULL_DATABASE role (for example, by querying DBA_ROLE_PRIVS).

Follow the steps below on the source and target database sides:

Source Database Side :


Step 1:

Check whether the schema exists. The schema must be present in the source database.

SQL> select username, account_status, created from dba_users where username='ORACLE';

Step 2:

Check the schema size

SQL> select owner, sum(bytes)/1024/1024 "MB" from dba_segments where owner='ORACLE' group by owner;

Step 3:

Take a count of the schema objects; this will be useful after the refresh completes, to compare the target and source schema objects.

SQL> select object_type, count(*) from dba_objects where owner='ORACLE' group by object_type;

Step 4:

Before taking the export, check the size of the mount point where you are going to store the export dump file; if the mount point does not have sufficient space, the export job will fail.

example :

]$  cd /d01/exports

exports]$ df -h .    (on Linux; use df -g . on AIX and Sun Solaris)
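
The manual df check above can be scripted. A hedged sketch for Linux (the function name and required size are placeholders; `df -P -m` prints megabyte counts with GNU df, so on AIX or Solaris the flags and awk column would need adjusting):

```shell
# Compare free space on the export mount point against the schema size
# reported in Step 2 (need_mb, rounded up to whole megabytes).
check_space() {
  mount_point="$1"
  need_mb="$2"
  avail_mb=$(df -P -m "$mount_point" | awk 'NR==2 { print $4 }')
  if [ "$avail_mb" -lt "$need_mb" ]; then
    echo "not enough space: ${avail_mb}MB free, ${need_mb}MB needed"
    return 1
  fi
  echo "ok: ${avail_mb}MB free for a ${need_mb}MB export"
}
# usage: check_space /d01/exports 2048
```

The nonzero return code makes this easy to drop into a pre-export cron job or wrapper script.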


Step 5 :

Create a Data Pump directory at the database level. The default Data Pump directory location is "/app/oracle/product/10.2.0/rdbms/log/".

First, create a directory at the OS level:

]$ mkdir -p  /d01/exports

Next, create it at the database level:

SQL>  create or replace directory DATAPUMP as '/d01/exports';

Step 6 :

Now take the export of schema.

]$ expdp \'/ AS SYSDBA\' directory=DATAPUMP dumpfile=ORACLE_DBNAME.dmp logfile=ORACLE_DBNAME.log schemas=ORACLE
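
The dump file name above follows a <schema>_<database> convention. A small sketch that assembles the same command from variables so the one-liner can be reused for any schema (the schema and SID values here are placeholders, not fixed values):

```shell
schema=ORACLE        # placeholder: schema being refreshed
sid=DBNAME           # placeholder: source database name
# Assemble the expdp command used in this step; it is echoed here for
# review before being run on the database server.
cmd="expdp \\'/ AS SYSDBA\\' directory=DATAPUMP dumpfile=${schema}_${sid}.dmp logfile=${schema}_${sid}.log schemas=${schema}"
echo "$cmd"
```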


Step 7 :

If you are going to import on the same server, there is no need to move the dump file anywhere. If you are going to import this dump file on some other server, copy the dump file with the scp command:

expdp]$ scp -p ORACLE_DBNAME.dmp username@servername:/d01/targetlocation   (it will prompt you for the target server password)
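
Before importing on the target, it is worth confirming the dump survived the copy. A small sketch using the POSIX cksum utility (the helper name and path are assumptions; run it on both hosts and compare the output):

```shell
# Print the checksum and byte count of a dump file; identical output on
# the source and target hosts means the scp copy is intact.
dump_sum() {
  cksum "$1" | awk '{ print $1, $2 }'
}
# usage: dump_sum /d01/exports/ORACLE_DBNAME.dmp
```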


We have completed all the steps on the source database side; now we move to the target database side.

Target Database :

Step 1 :

Check the mount point size; it should be larger than the schema size.

]$  cd /d01/exports

exports]$ df -h .    (on Linux; use df -g . on AIX and Sun Solaris)

Step 2 :

Create a directory just as we did for the source database.

First, create a directory at the OS level:

]$ mkdir -p  /d01/exports

Next, create it at the database level:

SQL>  create or replace directory DATAPUMP as '/d01/exports';


Step 3:

Take a backup of the target schema with export before importing (to be on the safe side).

]$ expdp \'/ AS SYSDBA\' directory=DATAPUMP dumpfile=ORACLE_DBNAME_bkp.dmp logfile=ORACLE_DBNAME_bkp.log schemas=ORACLE


Step 4 :

SCRIPTS: Find the complete list of objects in the schema

Use these scripts when you do not need to drop the schema itself but only the dependent objects in it, for example to preserve DB links, grants, and privileges.
The scripts below generate all of the drop statements needed to drop almost all objects from a specified schema (you are prompted for the schema name); occasionally one or two objects will fail and you will have to drop those manually.

EXECUTION
Log in to the database where the schema to be dropped exists. Copy and paste the following script, double-checking that you are in the correct database:

--Initializations
set linesize 1000;
SET VERIFY OFF
col owner format a15;
col object_name format a30;
col object_type format a20;


--Select Non-system object owners
SELECT OWNER,OBJECT_TYPE,COUNT(*)
FROM SYS.DBA_OBJECTS
WHERE OWNER NOT IN ('SYS','SYSTEM','TSMSYS','ORACLE_OCM','WMSYS','PATMAN','OUTLN','PUBLIC','DBSNMP','XDB','APPQOSSYS','CTXSYS')
GROUP BY OWNER,OBJECT_TYPE
ORDER BY OWNER,OBJECT_TYPE;

--Select specific Owner, Object_Type & Count
SELECT OWNER,OBJECT_TYPE,COUNT(*)
FROM SYS.DBA_OBJECTS
WHERE OWNER = '&&USER'
GROUP BY OWNER,OBJECT_TYPE
ORDER BY OWNER,OBJECT_TYPE;

--Drops: Tables (their indexes and constraints are dropped with them)
SELECT 'DROP TABLE '||OWNER||'.'||OBJECT_NAME||' CASCADE CONSTRAINTS PURGE;'
FROM SYS.DBA_OBJECTS
WHERE OWNER = '&&USER'
AND OBJECT_TYPE IN ('TABLE');

--Drops: Sequences, Views, Packages, Functions, Procedures, Synonyms, Triggers
SELECT 'DROP '|| OBJECT_TYPE || ' ' ||OWNER||'.'||OBJECT_NAME||';'
FROM SYS.DBA_OBJECTS
WHERE OWNER = '&&USER'
AND OBJECT_TYPE IN ('PACKAGE','SEQUENCE','VIEW','FUNCTION','PROCEDURE','SYNONYM','TRIGGER');

--Drops: Types
SELECT 'DROP '|| OBJECT_TYPE || ' ' ||OWNER||'.'||OBJECT_NAME||' FORCE;'
FROM SYS.DBA_OBJECTS
WHERE OWNER = '&&USER'
AND OBJECT_TYPE IN ('TYPE');

--DO NOT DROP OR IMPORT DBLINKS, EXCLUDE=DATABASE_LINK
SELECT OWNER, OBJECT_TYPE, COUNT(*)
FROM SYS.DBA_OBJECTS
WHERE OWNER = '&&USER'
AND OBJECT_TYPE IN ('DATABASE LINK')
GROUP BY OWNER, OBJECT_TYPE;

OR

Drop the schema at the destination database:

SQL> drop user <schema_name> cascade;
(It is better to drop only the schema objects instead of dropping the whole schema.)

Step 5 :

Import the dumpfile into target schema

]$ impdp \'/ AS SYSDBA\' directory=DATAPUMP dumpfile=ORACLE_DBNAME.dmp logfile=ORACLE_DBNAME_imp.log schemas=ORACLE

Step 6 :

Compare the object count with the source database.

SQL> select object_type, count(*) from dba_objects where owner='ORACLE' group by object_type;

If all the object counts match and everything looks good, go ahead and run utlrp.sql.
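
If the source and target counts are spooled to text files (the file names below are only examples), a plain diff is enough to flag any mismatch before recompiling:

```shell
# Compare two spooled object-count listings; any difference means the
# refresh is incomplete and needs investigating before running utlrp.sql.
compare_counts() {
  if diff "$1" "$2" > /dev/null; then
    echo "object counts match"
  else
    echo "MISMATCH between source and target counts"
    diff "$1" "$2"
    return 1
  fi
}
# usage: compare_counts source_counts.txt target_counts.txt
```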

Step 7 :

Check invalid objects count

SQL> select owner, object_type, status, count(*)
from sys.dba_objects
where status = 'INVALID'
group by owner, object_type, status
order by owner, object_type, status;


SQL> @?/rdbms/admin/utlrp.sql


