Course-Core-DBA

 1. Oracle Architecture

Create Linux Infrastructure on VMware to Install a Standalone Database - Watch Video | Read Content | Download OEL 7.9 Software | Download VMware

Basic Linux Commands for Oracle DBAs

Oracle Database software/binary installation + listener creation with NETCA + database creation with DBCA

Oracle Database software/binary installation in silent mode & manual database creation on the chosen storage mechanism.

  • 11gR2 - Watch Video | Install Binary, Create Database, Listener & TNS Configuration
  • 12cR2 - Watch Video | Install Binary, Create Database, Listener & TNS Configuration
  • 19c - Watch Video | Install Binaries + Manual Database Creation

One Document for All the Above Tasks

Oracle patching is the process of applying updates and fixes to Oracle Database software to address bugs, close security vulnerabilities, and improve performance.

OPatch is a Java-based utility that applies fixes and patches to an Oracle Database home.

Oracle releases Critical Patch Updates (CPUs) every quarter to fix bugs and security issues in existing releases; a fix delivered for one specific bug is called a one-off patch.
A Patch Set Update (PSU) contains the quarterly CPU plus all previous CPU patches, and each PSU adds roughly 25 to 100 new bug fixes. PSUs are referenced by the fifth digit of the Oracle version number, which makes them easier to track (e.g. 10.2.0.3.1).

A patch set is a collection of patches, including bug fixes and enhancements, bundled together for easier application. 

A patch, on the other hand, is a smaller update that addresses a specific issue.

Interim patches fix specific existing bugs in the software/database and are applied with the opatch utility. OPatch updates files in the ORACLE_HOME.

A patchset is applied to upgrade an existing Oracle release using Oracle Universal Installer, and it also upgrades database objects. Example: upgrading a database to a new version or patchset level (10.2.0.3 to 10.2.0.4).

The global inventory is where all Oracle homes on a server are registered. These homes can belong to various Oracle products, such as Database, Application Server, Collaboration Suite, SOA Suite, Forms & Reports, or Discoverer.

The inventory inside each Oracle home is called the local inventory (oracle_home inventory). It holds information about that Oracle home only.

Step 1: Set ORACLE_HOME
Step 2: cd to the $ORACLE_HOME/OPatch directory
Step 3: execute:
$ ./opatch version

$ORACLE_HOME/OPatch/opatch apply
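The steps above can be sketched as a single session script. This is a hypothetical sketch: the ORACLE_HOME path and patch number 34765931 are assumptions, not values from the source.

```shell
# Write the patch session as a reviewable script; all values are examples.
cat > apply_patch.sh <<'EOF'
#!/bin/sh
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
export PATH="$ORACLE_HOME/OPatch:$PATH"
opatch version                                          # 1. check OPatch meets the patch README's minimum
cd /stage/34765931 || exit 1                            # 2. go to the unzipped patch directory
opatch prereq CheckConflictAgainstOHWithDetail -ph ./   # 3. conflict check before touching the home
opatch apply                                            # 4. apply (database and listener stopped first)
opatch lsinventory | grep 34765931                      # 5. confirm the patch is registered in the inventory
EOF
chmod +x apply_patch.sh
```

Review the generated script, then run it on the database server as the oracle software owner.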

If you have applied the wrong patch, or the patch did not fix the issue, you can roll it back.

Example:

$ORACLE_HOME/OPatch/opatch rollback -id <Patch_ID>

Check inventory.xml under the ContentsXML directory in oraInventory.

cd $ORACLE_HOME/oui/bin
./runInstaller -attachHome ORACLE_HOME="/export/home/oracle/oracle/db_1/10.2.0" ORACLE_HOME_NAME="Oracle10g_home1"

We can recreate the global inventory on a machine using Universal Installer and attach an already-installed Oracle home with the -attachHome option.

$ ./runInstaller -silent -attachHome -invPtrLoc $location_to_oraInst.loc ORACLE_HOME="Oracle_Home_Location" ORACLE_HOME_NAME="Oracle_Home_Name" CLUSTER_NODES="{}"

The `opatch prereq CheckConflictAgainstOHWithDetail` command can be used to check whether a patch is applicable and if there are any conflicts or prerequisites that need to be addressed.

The patch level of an Oracle database can be determined by querying DBA_REGISTRY_SQLPATCH (12c and later; DBA_REGISTRY_HISTORY on earlier releases) or by running the opatch utility with the command opatch lsinventory.
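Both checks can be sketched as follows; the script file name is an assumption, and the sqlplus/opatch invocations are shown commented since they need a running instance and an installed home.

```shell
# Write a small SQL script that lists the patches recorded inside the database.
cat > check_patches.sql <<'EOF'
-- 12c and later record applied patches here (on 11g, query DBA_REGISTRY_HISTORY).
SELECT patch_id, status, action_time
  FROM dba_registry_sqlpatch
 ORDER BY action_time;
EOF
# sqlplus -S / as sysdba @check_patches.sql
# Inventory view of the same information, from the OS side:
# $ORACLE_HOME/OPatch/opatch lsinventory
```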

  • Performing patches during scheduled maintenance windows.
  • Using Oracle Real Application Clusters (RAC) for high availability.
  • Employing Oracle Data Guard for rolling upgrades.

OPatch automatically detects and reports conflicts before patching starts.

We can use the -force option to override this failure. If we specify -force, the installer first rolls back any conflicting patches and then proceeds with the installation of the desired interim patch.

We may experience a bug conflict and might want to remove the conflicting patch. 

During patch installation, OPatch saves copies of all files replaced by the new patch before loading the new versions, storing them in $ORACLE_HOME/.patch_storage. These saved files are called rollback files and are what makes patch rollback possible: when you roll back a patch, they are restored to the system. Override the default behavior with the -force flag only if you fully understand the patch rollback process. To roll back a patch, execute the following command:

$ OPatch/opatch rollback -id <Patch_ID>

OPatch log files are written to $ORACLE_HOME/cfgtoollogs/opatch.

The Oracle Inventory is where all the information about Oracle homes is stored. OPatch needs to read this information in order to apply patches.

The location of inventory is defined in oraInst.loc file. The path for this file is provided while installing Oracle Home.

If we don't supply any path, the file will be at the central location: /etc/oraInst.loc (on Linux) or /var/opt/oracle/oraInst.loc (on Solaris).

$ cat oraInst.loc
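For reference, oraInst.loc holds just two entries; the paths below are a typical example, not values from the source:

```
inventory_loc=/u01/app/oraInventory
inst_group=oinstall
```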

We can create a local inventory at any location while installing an Oracle home by passing the -invPtrLoc variable to the runInstaller command with the location of oraInst.loc. Otherwise the default path assumed for oraInst.loc is /etc/oraInst.loc.

./runInstaller -invPtrLoc /slot/ems2029/appmgr/apps/tech_st/10.1.2/oraInst.loc

Interim patches are individual patches provided to fix specific issues, while recommended patches are collections of tested and highly recommended patches for a particular Oracle version.

Before applying patches, it's essential to back up the database, review the patch documentation, test the patch in a non-production environment, and notify stakeholders about the planned downtime.

  • Reviewing patch documentation.
  • Verifying prerequisites.
  • Backing up the database.
  • Applying the patch using tools like opatch.
  • Post-patch tasks such as running scripts or applying additional configurations.

Proactive patching involves applying patches as part of regular maintenance to prevent issues. Reactive patching is done in response to specific problems or vulnerabilities.

  • Reviewing patch documentation thoroughly.
  • Testing the patch in a non-production environment.
  • Taking a full backup of the database.
  • Noting any potential impact on applications and users.

External Link: Resolve Patch Conflicts

Hands-On Activity:

Apply Patch on 12cR2 database

 

  • What is Export / Import
    • How it works
    • How to set up
  • How to Export/Import Table
  • How to Export/Import Schema
  • How to Export/import Database
  • How to monitor the current jobs
  • What to Do If a Data Pump Job Crashes
  • Can we restart failed or Crashed Data Pump Jobs
  • How to run an exp/imp job using a parfile
  • Data Pump Export Progress %
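The export, monitoring and restart topics above can be sketched in one place. This is a hypothetical example: the HR schema, the DATA_PUMP_DIR directory object, and the job name are assumptions.

```shell
# Drive a schema export from a parameter file (all values are examples).
cat > exp_hr.par <<'EOF'
directory=DATA_PUMP_DIR
dumpfile=hr_%U.dmp
logfile=exp_hr.log
schemas=HR
parallel=2
EOF
# expdp system parfile=exp_hr.par

# Monitor running jobs and progress %, from SQL*Plus:
#   SELECT owner_name, job_name, state FROM dba_datapump_jobs;
#   SELECT sid, sofar, totalwork, ROUND(sofar/totalwork*100, 2) pct
#     FROM v$session_longops
#    WHERE opname LIKE '%EXPORT%' AND sofar <> totalwork;

# If a job crashes, reattach to it and restart (job name is an example):
# expdp system attach=SYS_EXPORT_SCHEMA_01
# Export> START_JOB
```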

Data Pump / Export-Import - Concepts

Document: Database Upgrade from 12c to 19c using Datapump

 

  • What is Cloning?
  • Why is Export/Import not suitable for Cloning?
  • Method behind Cloning
  • Reason for Cloning?
  • Difference between cloning and refreshing?

Cloning Methods / Cases

  • Cloning Using Cold Backup
  • Cloning Using RMAN Backup
    • If the datafile structure is the same
    • If the datafile structure is different
  • Cloning Using the Duplicate Database Method
    • Active database duplication
    • Backup-based duplication

Click Here to View Document

FAQ - Interview Question

Cloning is the process of creating an exact copy of an Oracle database without using export/import. We mostly clone from production to a test or development environment for testing purposes.

In the cloning process, we copy the datafiles from the production database and use them to create the test database. The test database is then a mirror image of production in all respects.

 

We can create a clone instance/database on the production server or on a different server, but my recommendation is not to do it on the production server.

The export/import process is very bulky: if you use it, you have to refresh the data frequently, and an import takes a long time on a sizeable database. You can, of course, resort to exporting and importing tablespaces if they are self-contained and you are running version 8i or later, Enterprise Edition. A faster alternative is to clone the database.

There are several methods for cloning, such as RMAN (Recovery Manager), cloning from a cold backup, or a user-managed hot backup. We can also use the active duplicate database method.

The method used here is actually a backup/recovery of the production database on the test server. We just recreate the controlfile on the test machine using a new database name and do a recovery. The datafiles from the production database can be from a hot backup, a cold backup or an RMAN backup. If the database was open during backup (hot or RMAN), you will need all the archivelogs since the time the backup started to recover the database to its present state (that is, do a complete recovery). Throughout the article, I will refer to environment variables by their UNIX notation (example $ORACLE_HOME). Replace this with the Windows equivalent if you are using NT/2000 (example $ORACLE_HOME becomes %ORACLE_HOME%).

RMAN cloning involves using the Oracle Recovery Manager utility to create a duplicate copy of a database. The steps include:

  1. Connect to the target and auxiliary databases.
  2. Allocate channels for the duplicate database.
  3. Set the auxiliary instance parameters.
  4. Run the RMAN duplicate command.
  5. Open the duplicate database.
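The five steps above can be captured as an RMAN command file. This is a sketch: the net service names prod and clone, and the SPFILE clause, are assumptions.

```shell
# Build the duplication command file (all names are examples).
cat > duplicate.rman <<'EOF'
CONNECT TARGET sys@prod;
CONNECT AUXILIARY sys@clone;
RUN {
  ALLOCATE AUXILIARY CHANNEL c1 DEVICE TYPE DISK;
  DUPLICATE TARGET DATABASE TO clone
    FROM ACTIVE DATABASE
    SPFILE
    NOFILENAMECHECK;
}
EOF
# Start the auxiliary instance NOMOUNT first, then run:
# rman cmdfile=duplicate.rman
```

RMAN restores and recovers the auxiliary database and opens it with RESETLOGS as the final step of DUPLICATE.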

 

Cold cloning involves shutting down the source database before creating the clone, while hot cloning can be done while the source database is still running.

Cold cloning ensures a consistent state but requires downtime, while hot cloning allows for minimal downtime but may have data consistency considerations.

Using Data Pump for cloning involves the following steps:

  1. Export the source database using `expdp`.
  2. Transfer the export dump file to the target server.
  3. Create a new empty target database.
  4. Import the dump file into the target database using `impdp`.
  5. Perform necessary post-import tasks, such as recompiling invalid objects.
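The steps above sketched as commands; all paths, file names and the scp target are assumptions, not values from the source.

```shell
# Parameter file for the import side of the clone (example values).
cat > imp_full.par <<'EOF'
full=y
directory=DATA_PUMP_DIR
dumpfile=full_%U.dmp
logfile=imp_full.log
EOF
# 1. On the source:   expdp system full=y directory=DATA_PUMP_DIR dumpfile=full_%U.dmp logfile=exp_full.log
# 2. Copy the dumps:  scp full_*.dmp target:/u01/app/oracle/admin/ORCL/dpdump/
# 3. Create an empty database on the target (DBCA or creation scripts).
# 4. On the target:   impdp system parfile=imp_full.par
# 5. Recompile invalid objects afterwards:
#    sqlplus / as sysdba @?/rdbms/admin/utlrp.sql
```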

 

When cloning a database with sensitive data, it's essential to ensure proper security measures. This includes encrypting the data during the cloning process, securing the transfer of data between source and target systems, and implementing access controls on the cloned database.

Handling database links and dependencies involves updating connection strings and configurations in the cloned database to reflect the new environment. This may include modifying database links, updating listener configurations, and ensuring that any external dependencies are appropriately adjusted.

A pluggable database (PDB) is a feature introduced in Oracle Multitenant architecture, allowing multiple databases to share a single container database (CDB). During cloning, PDBs can be cloned independently, providing a more efficient way to create copies of specific databases within a container.

  1. About ASM: Functionality and Advantages
  2. ASM Instance / ASM Instance Architecture / Managing the ASM Instance
  3. Managing ASM Disk Groups
  4. Managing ASM Directories, Aliases, Files, Metadata and Templates
  5. Useful Queries for an ASM-Enabled Database
  6. Move a Non-ASM Database to ASM
  7. Access ASM Disk Files through FTP and HTTP

ASM Internals:

  1. ASM Re-Balancing
  2. ASM Redundancy / High Availability
  3. ASM vs Hardware RAID
  4. ASM Storage/Disk Group
  5. ASM exists to manage file storage for the RDBMS
  6. Fail Groups mirroring
  7. ASM Performance
  8. Others

Hands-On Lab

Installing Oracle Grid Infrastructure for ASM Standalone Database

FAQ - Interview Questions

ASM is a powerful feature. As DBAs we no longer worry about database file management tasks (datafiles, control files, redo log files, archive logs, RMAN backup sets, and so on) because ASM automates them completely. We deal with a few disk groups instead of handling datafiles directly. (Example: in an ASM-enabled database we don't refer to a tablespace by filename; we just use the disk group name.)

The three major components are the ASM instance, disk groups, and ASM files.

In an ASM-enabled database, we first start the ASM instance (this is not a full database instance, just memory structures, so it is very small and lightweight). The ASM instance manages the disk groups and helps database instances access the ASM files.

  • A big advantage is that ASM lets us bypass a third-party LVM, mirroring and striping our disks directly, because ASM works as an LVM and handles striping and mirroring itself.
  • As a DBA there is no need to relocate data, because ASM prevents disk fragmentation.
  • ASM performs mirroring and striping. Mirroring is applied on a file basis rather than a disk basis, and ASM automatically balances I/O.

  • ASM manages groups of disks, called disk groups.
  • It manages disk redundancy within a disk group.
  • It provides near-optimal I/O balancing without any manual tuning, and supports large files.

  • INSTANCE_TYPE = ASM
  • ASM_POWER_LIMIT = 11
  • ASM_DISKSTRING = '/dev/rdsk/*s2', '/dev/rdsk/c1*'
  • ASM_DISKGROUPS = DG_DATA, DG_FRA

The ASM instance maintains the ASM file metadata, which database instances use to access the files directly. ASM does not mount the database files and has no data dictionary. The ASM instance only manages and protects the disk groups and communicates file metadata to database instances.

ASM has its own background processes, such as SMON, PMON and LGWR.

On the ASM instance side, two new background processes are introduced: ASM Rebalance Master (RBAL) and ASM Rebalance (ARBn).

  • ASM Rebalance Master (RBAL) is in charge of coordinating disk activity.
  • ASM Rebalance (ARBn) performs the actual rebalancing work, such as moving data extents around.

On the database instance side, two new background processes are introduced: RBAL and ASMB.

  • The RBAL process performs a global open of the disks in the ASM disk groups.
  • The ASMB background process connects as a foreground process to the ASM instance.

  • Rebalancing is the process of moving file extents among disks.
  • The purpose of rebalancing is to distribute the I/O load evenly across the disk group.
  • In a RAC environment, rebalancing of a disk group is done within a single ASM instance and cannot be distributed across multiple cluster nodes to speed it up.
  • ASM automatically rebalances data when disks are added or removed; the effort is controlled by the ASM_POWER_LIMIT initialization parameter.

ASM_POWER_LIMIT is a dynamic parameter that controls how aggressively rebalancing runs across disks. Values range from 1 (lowest) to 11 (highest); from 11.2.0.2 onward, with the disk group's compatibility raised accordingly, values up to 1024 are allowed, and 0 disables rebalancing.
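A rebalance can also be driven per operation with an explicit POWER clause. The disk group name dg_data below is an assumption; the sqlplus call is commented since it needs a running ASM instance.

```shell
# Write the rebalance commands to a script (example disk group name).
cat > rebalance.sql <<'EOF'
-- Raise the rebalance effort for this one operation (overrides ASM_POWER_LIMIT).
ALTER DISKGROUP dg_data REBALANCE POWER 8;
-- Watch progress; EST_MINUTES shows the remaining-time estimate.
SELECT operation, state, power, est_minutes FROM v$asm_operation;
EOF
# sqlplus / as sysasm @rebalance.sql
```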

External redundancy, Normal redundancy, High redundancy.
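One disk group example per redundancy level, run from the grid environment; the disk paths and group names are assumptions.

```shell
# Write the disk group creation statements (all paths are examples).
cat > create_dg.sql <<'EOF'
-- External redundancy: ASM does no mirroring; the storage array protects the disks.
CREATE DISKGROUP dg_data EXTERNAL REDUNDANCY
  DISK '/dev/oracleasm/disks/DATA1';
-- Normal redundancy: two-way mirroring across at least two failure groups.
CREATE DISKGROUP dg_fra NORMAL REDUNDANCY
  FAILGROUP fg1 DISK '/dev/oracleasm/disks/FRA1'
  FAILGROUP fg2 DISK '/dev/oracleasm/disks/FRA2';
-- High redundancy works the same way, with three-way mirroring across
-- at least three failure groups.
EOF
# sqlplus / as sysasm @create_dg.sql
```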

Fine-grained striping, Coarse-grained striping

Fine Striping: Fine-grained striping writes 128 KB of data to each ASM disk in the disk group in a round-robin fashion — 128 KB goes to the first disk, the next 128 KB to the next disk, and so on. According to the manual, "The fine-grained stripe size always equals 128 KB in any configuration; this provides lower I/O latency for small I/O operations." Small I/O operations make redo logs, control files and the like good candidates.

Coarse Striping: With coarse-grained striping, ASM writes data to each disk in the same round-robin fashion, but in chunks of the ASM instance's allocation unit (AU) size, which defaults to 1 MB.

  • Every ASM disk is divided into allocation units (AUs).
  • An AU is the fundamental unit of allocation within a disk group.
  • A file extent consists of one or more AUs.
  • An ASM file consists of one or more file extents.
  • CREATE DISKGROUP disk_group_2 EXTERNAL REDUNDANCY DISK '/dev/sde1' ATTRIBUTE 'au_size' = '32M';

Process flow for ASM rebalancing:

  • On the ASM instance, a disk is added to or dropped from a disk group.
  • This invokes the RBAL process to create the rebalance plan and begin coordinating the redistribution.
  • RBAL estimates the time and work required to perform the task, then messages the ARBx processes to actually handle the request.
    • The number of ARBx processes invoked is directly determined by ASM_POWER_LIMIT.
    • The Continuing Operations Directory (metadata) is updated to reflect the rebalance activity.
    • Each extent to be relocated is assigned to an ARBx process.
  • ARBx rebalances these extents; each extent is locked, relocated, and unlocked. This shows as operation REBAL in V$ASM_OPERATION.

RMAN Backup Script for Non-RAC Database

Sample Shell Scripts for Scheduling RMAN Backup 
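A minimal nightly backup sketch for a non-RAC database follows; the SID, paths and retention handling are assumptions, and the rman invocation is commented since it needs an installed home.

```shell
#!/bin/sh
# Nightly RMAN backup sketch (all values are examples, not from the source).
export ORACLE_SID=ORCL
export ORACLE_HOME=/u01/app/oracle/product/19.0.0/dbhome_1
LOG=/u01/backup/logs/rman_$(date +%Y%m%d_%H%M).log

# Generate the RMAN command file the script will run.
cat > /tmp/backup.rman <<'EOF'
RUN {
  BACKUP AS COMPRESSED BACKUPSET DATABASE PLUS ARCHIVELOG DELETE INPUT;
  BACKUP CURRENT CONTROLFILE;
  DELETE NOPROMPT OBSOLETE;
}
EOF
# $ORACLE_HOME/bin/rman target / cmdfile=/tmp/backup.rman log="$LOG"
# Schedule with cron, e.g.:  0 2 * * * /home/oracle/scripts/rman_backup.sh
```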

  1. Introduction
  2. User Managed Backup Terminology 
  3. RMAN Managed Backup Terminology
  4. Different Type of backup
  5. Type of Recovery
  6. RMAN: RESTORE Concept
  7. RMAN with a Recovery Catalog
  8. How to Configure RMAN
  9. Complete Steps for Using RMAN through Catalog
  10. Recovery Case Study (without using RMAN)
    • Recovery from a missing or corrupted redo log group
      Case 3: The missing redo log is CURRENT
    • Recovery from a missing or corrupted control file
      Case 1: A multiplexed copy of the control file is available
    • Recovery from a missing or corrupted datafile
      Case 1: The datafile has gone missing or is corrupted at the time the database is opened
      Case 2: The datafile goes missing or is corrupted while the database is already open
  11. Recovery Case Study (using RMAN)
    • Datafile recovery
      Case 1: The datafile has gone missing or is corrupted at the time the database is opened
      Case 2: The datafile goes missing or is corrupted while the database is already open
    • Control file recovery
      Case 2: All control files lost
    • Redo log file recovery
      Case 1: A multiplexed copy of the missing log is available
      Case 2: All members of a log group lost
  14. Time-Based, Change-Based or SCN-Based Recovery
  15. Recover the UNDO Datafile
      Case 1: The undo datafile is lost after a clean shutdown
      Case 2: The database was not cleanly shut down

This course is designed to empower database administrators, developers, and IT professionals with the skills to optimize Oracle databases for peak performance. Dive into the intricacies of Oracle database architecture, learn hands-on strategies, and gain expertise in SQL tuning, memory management, I/O optimization, and more.

Who Should Attend

- Database Administrators
- Database Developers
- IT Professionals in Oracle database management

Prerequisites

Basic understanding of Oracle database concepts and SQL.

Join me on this journey to elevate your Oracle performance tuning skills and make a significant impact on database efficiency and reliability within your organization! Let's optimize together!
