Using Transportable Tablespaces for Oracle Upgrades

Many of us are familiar with the idea of using the Oracle 10g cross-platform transportable tablespaces (XTTS) feature to migrate a database to another operating system.  Our founder Roger Schrag wrote a fantastic whitepaper on the subject which can still be found at:


along with many other useful whitepapers about performance tuning, software installation, remote database administration, and a range of other topics.


Not to downplay that capability, but given that the big wave of migrations to Linux has somewhat played out, I have found the Transportable Tablespace (TTS) feature also very useful for doing Oracle upgrades in less time than a traditional export/import approach would take, particularly when the database is large.  It’s still hard to beat the in-place upgrade for limiting downtime during an upgrade, but what if you’re also moving to bigger and better hardware?


For limiting downtime during the combination of upgrading Oracle and moving to a new piece of iron, I recommend transportable tablespaces.  The trouble is that there seems to be precious little information available from Oracle on how to do this.  There is absolutely no mention of using transportable tablespaces in the Upgrade Guide.  But it’s definitely doable.


For starters, you must use Enterprise Edition on your source database to generate a set of transportable tablespaces.  They can be imported into a Standard Edition database, however, which is also not a bad method of moving from EE to SE, by the way.  There are other limitations, too, such as restrictions on XMLTypes and on opaque types like RAW.  BINARY_FLOAT and BINARY_DOUBLE types can only be transported using Data Pump.  In any case, check the manual before proceeding to be sure that your database is a good candidate for an upgrade via TTS.
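One quick sanity check you can run before committing to this approach is to ask Oracle whether your tablespace set is self-contained.  A minimal sketch using the DBMS_TTS package (the tablespace names USERS and DATA are placeholders; substitute your own):

```sql
-- Verify that the tablespace set is self-contained, i.e. no objects
-- in the set reference segments outside it.  Requires EXECUTE on
-- DBMS_TTS (granted via the EXECUTE_CATALOG_ROLE).
EXEC DBMS_TTS.TRANSPORT_SET_CHECK('USERS,DATA', incl_constraints => TRUE);

-- Any rows returned describe violations that must be resolved first,
-- for example an index inside the set whose table lives outside it.
SELECT * FROM transport_set_violations;
```

If the violations query returns no rows, the set is transportable as-is; otherwise you will need to either add the referenced tablespaces to the set or move the offending objects.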


As an aside, beginning with 10gR2 there is a feature known as Transportable Database (TDB) which can be handy when the source platform is on a different OS than the target platform, but the endian format must be the same, and TDB cannot be used for upgrades.
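You can check whether two platforms share an endian format by querying the V$TRANSPORTABLE_PLATFORM view.  A sketch:

```sql
-- Endian format of the platform this database is running on.
SELECT tp.platform_name, tp.endian_format
  FROM v$transportable_platform tp
  JOIN v$database d ON tp.platform_name = d.platform_name;

-- All platforms sharing that endian format, i.e. candidates
-- for Transportable Database from this source.
SELECT platform_name
  FROM v$transportable_platform
 WHERE endian_format = (SELECT tp.endian_format
                          FROM v$transportable_platform tp
                          JOIN v$database d
                            ON tp.platform_name = d.platform_name);
```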


While I haven’t tried every possible upgrade, Metalink claims that beginning with Oracle 8i there should be no problem transporting a tablespace into a higher version.  There are some differences in capability between versions, however.  For example, directly transporting materialized views and function-based indexes is not supported until 10g.


The upgrade using TTS should proceed like this:




  1. Create an empty database on the new host using the higher Oracle version.
  2. Capture the DDL for objects stored in the SYSTEM and SYSAUX tablespaces, such as PL/SQL, users, system privileges, etc.
  3. Perform a full backup on the source database.


Step #2 above is one that will require testing to make sure you don’t leave anything behind.  It’s not a huge task but one that shouldn’t be taken lightly, either.
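One way to tackle Step #2 is with the DBMS_METADATA package.  A rough sketch that extracts user definitions, grants, and PL/SQL for one application schema (the schema name SCOTT is a placeholder, and note that GET_GRANTED_DDL raises an error if the grantee has no grants of that type):

```sql
-- Make the output usable as a runnable script.
SET LONG 100000 PAGESIZE 0 LINESIZE 200
EXEC DBMS_METADATA.SET_TRANSFORM_PARAM(DBMS_METADATA.SESSION_TRANSFORM, 'SQLTERMINATOR', TRUE);

-- Recreate the application users on the target database.
SELECT DBMS_METADATA.GET_DDL('USER', username)
  FROM dba_users
 WHERE username IN ('SCOTT');

-- Their system privileges and role grants.
SELECT DBMS_METADATA.GET_GRANTED_DDL('SYSTEM_GRANT', 'SCOTT') FROM dual;
SELECT DBMS_METADATA.GET_GRANTED_DDL('ROLE_GRANT', 'SCOTT') FROM dual;

-- PL/SQL stored in the data dictionary.
SELECT DBMS_METADATA.GET_DDL(object_type, object_name, owner)
  FROM dba_objects
 WHERE owner = 'SCOTT'
   AND object_type IN ('PROCEDURE', 'FUNCTION', 'PACKAGE');
```

Spool the output on the source and run it against the empty target database before importing the tablespace metadata, so that the object owners already exist.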




  1. On the source database put the tablespaces to be transported into “read only” mode.  Remember that the SYSTEM and SYSAUX tablespaces cannot be transported.
  2. Run your Data Pump job to export the tablespace metadata.
  3. Stop or quiesce the source database.
  4. Copy or otherwise make available to the target database the data files from the transported tablespaces.
  5. Run your Data Pump job to import the tablespace metadata.
  6. Put the tablespaces into “read write” mode on the target database.
  7. Perform a full backup on the target database.
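In SQL*Plus and Data Pump terms, the core of the steps above looks roughly like this (directory objects, tablespace names, and file paths are all placeholders):

```sql
-- Step 1: on the source, freeze the tablespaces being moved.
ALTER TABLESPACE users READ ONLY;
ALTER TABLESPACE data READ ONLY;

-- Step 2: export the metadata (run from the OS shell):
--   expdp system directory=dp_dir dumpfile=tts_meta.dmp \
--         transport_tablespaces=USERS,DATA transport_full_check=y

-- Step 4: copy users01.dbf, data01.dbf, etc. to the target host.

-- Step 5: import the metadata on the target (OS shell again):
--   impdp system directory=dp_dir dumpfile=tts_meta.dmp \
--         transport_datafiles='/u02/oradata/NEWDB/users01.dbf',\
--                             '/u02/oradata/NEWDB/data01.dbf'

-- Step 6: on the target, open the tablespaces for writing.
ALTER TABLESPACE users READ WRITE;
ALTER TABLESPACE data READ WRITE;
```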

That’s the long and short of it.  Step #4 above is the one that will likely make or break this method.  If you have to copy lots of big files across a slow connection, then you’re defeating the purpose of using this method, which is to limit downtime, although it may still be faster than export/import.  If, on the other hand, you can just swing your storage device over to the new database host, then Step #4 will take no time at all.



