Recently I was tasked with upgrading our systems two major versions, from 9.7.1 to 10.5. I will chronicle my experiences here and perhaps provide some structure. I also want to quell some myths about the whole process, just to make sure you can do this logically and correctly.
First, create a draft plan. In my draft plan I included this: I would need at least one VM that could house our Production 9.7.1 binaries, a clone of the database, and a clone of the EFS. If one has been storing items in, say, Archive Server, one should have a playing copy of that as well. An OTAS-based Livelink upgrade can pose its challenges if not done correctly, but it is no different from the EFS: if you do it wrong, you can write test data into your productive store 🙂 All of this involves talking to other people, like DBAs and storage folks, because a heavily used LL system can have tons and tons of data, and you will most likely encounter the after-effects of long-time use, different administrators and their styles, and so on.

After a few failed attempts, I understood why OT says a DB5 verification is very important. The verification runs single-threaded and tries to pinpoint anomalies in the database. Most likely it will pinpoint content loss, such as bulk-imported items without actual version files. In my case the database had wrong pointers, duplicate DataIDs (wow), and removed KUAF IDs (somebody had fun with an Oracle tool).

So after going back and forth between OT and us, I wrote a program (Oscript) to check the important structures. I list three here, but your mileage will vary: essentially I grabbed all categories, form templates, and workflow map structures. None of these checks happen in DB5; it just looks for existence. In one hilarious case I uncovered, a category's data structure mysteriously turned up as a drawing PDF. One could argue that the users would have noticed, but this was an old category that nobody used. You don't have to do this, but each of those anomalies would prevent an upgrade and mean a lot of back and forth with OT support, so I did it just to gain a time advantage.
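For flavor, here is roughly what that kind of check looks like. My actual program was Oscript and walked the real structures; the sketch below is only a read-only SQL approximation in Python against the standard DTree, KUAF, and DVersData tables. The connection details are placeholders, and the subtype numbers are from my own notes, so verify everything against your instance before trusting the output.

```python
# health_check.py -- a minimal sketch of the pre-upgrade checks described
# above. My real program was Oscript; this is a read-only approximation
# against the standard DTree / KUAF / DVersData tables. Subtype numbers
# are from my notes (131 category, 128 workflow map, 230 form template),
# so verify them against your own instance first.
import cx_Oracle

# Placeholder credentials -- point this at the CLONE, never production.
conn = cx_Oracle.connect("lldba", "secret", "llclone")
cur = conn.cursor()

# 1. Duplicate DataIDs -- should be impossible, yet ours had them.
cur.execute("""
    SELECT DataID, COUNT(*)
      FROM DTree
     GROUP BY DataID
    HAVING COUNT(*) > 1
""")
for dataid, n in cur:
    print(f"duplicate DataID {dataid} appears {n} times")

# 2. Owners missing from KUAF (the "removed KUAF IDs" case).
cur.execute("""
    SELECT DataID, UserID
      FROM DTree
     WHERE UserID NOT IN (SELECT ID FROM KUAF)
""")
for dataid, userid in cur:
    print(f"DataID {dataid} is owned by missing KUAF ID {userid}")

# 3. Structure-bearing objects with no version data behind them. DB5 only
#    checks existence; this flags hollow definitions. Not every subtype
#    keeps its definition in DVersData in every release, so treat hits
#    as leads to investigate, not verdicts.
for subtype, label in ((131, "category"), (128, "workflow map"), (230, "form template")):
    cur.execute("""
        SELECT d.DataID, d.Name
          FROM DTree d
         WHERE d.SubType = :st
           AND NOT EXISTS (SELECT 1 FROM DVersData v WHERE v.DataID = d.DataID)
    """, st=subtype)
    for dataid, name in cur:
        print(f"{label} '{name}' ({dataid}) has no version data")

conn.close()
```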
The second thing was that I had to gather all the optional modules. Our system is no different from any other: a lot of optional modules, some inserting schema, some not.
The third thing was that our systems were customized with Oscript to provide an enhanced user experience, along with awesome home-written modules that take the tedium out of long-running things like category upgrades and permission pushes, and a very intelligent add-on to the OI module that people can use generically to upload their data. So our team sat down and thought through what we could retire and what we should re-code or refactor; many were thrown out, and many useful ones we kept. We particularly liked the Distributed Agent framework; ours is very similar to it, and if the time comes and there is an appetite, we would re-code ours on top of it.
About the re-coding effort, here's the clunker: we were to use CSIDE. Naturally, all of my team were hard-core Builder aficionados, and boy, was that a journey. I think we were probably the first org to do something serious with CSIDE, so naturally I gave OT a lot of screenshots and dumps, and OT development was very receptive. Under extremely tight time constraints we developed parallel modules in 9.7.1 and 10 and checked them into our source-code maintenance system, TFS. We also learned how to work in parallel on Oscript; not being a code-churning house, our code is not that complicated, so while I worked on one module somebody else could work on another.
So the environment we prepared was:
- A 9.7.1 VM with the same Oracle client and a like-for-like clone of the database.
- I removed the Prod EFS and Prod Admin server information from the clone (see the scrub sketch after this list).
- We decided to rebuild the search index, as nobody had any clue whether it was still good, and it predated the new search technology by eons.
- A base install of LL was done, and the working 9.7.1 binary copy from prod was put on top of it as an overlay. Many people starting out do not know this technique, where you can get a copy of your prod binaries, replete with patches et al., into a playing system; BTW, this is what you hear called the "Parallel Upgrade" approach (a rough sketch follows this list). Its database connection was then redefined to point at the clone. In the olden days, when there was no differentiation between 32-bit and 64-bit and VMs were neither popular nor cheap, it was possible to use the existing boxes and run the upgrade in place. That was the "Update an Instance" method. It is largely vestigial at this point and very error-prone, not to mention that you would have a real hard time going back to the working version 🙂
- All old 9.7.1 custom modules were removed, so this database only had knowledge of OT software, core and optional (also covered in the scrub sketch below).
- I did the health check I described earlier (look at categories, form templates, and workflow maps) and removed anything that would hamper an upgrade.
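To make the overlay step above concrete, here is a rough sketch of it, under stated assumptions: the paths are examples, and the opentext.ini section and key names are from memory, so diff them against your own file rather than taking them literally.

```python
# overlay_sketch.py -- rough sketch of the "Parallel Upgrade" overlay:
# copy the production 9.7.1 OTHOME over a fresh base install, then
# repoint the database connection at the clone. Paths and ini key names
# are illustrative assumptions -- check your own opentext.ini first.
import shutil
import configparser

PROD_COPY = r"D:\staging\OTHOME_prod"   # prod binaries, patches et al.
BASE_INSTALL = r"D:\LL971\OTHOME"       # fresh base install, same version

# 1. Overlay: production files replace the base install's files.
shutil.copytree(PROD_COPY, BASE_INSTALL, dirs_exist_ok=True)

# 2. Repoint the database connection at the clone. strict=False because
#    opentext.ini is only mostly ini-shaped and may repeat keys.
ini_path = BASE_INSTALL + r"\config\opentext.ini"
ini = configparser.ConfigParser(strict=False)
ini.optionxform = str                   # preserve key casing
ini.read(ini_path)

for section in ini.sections():
    if section.lower().startswith("dbconnection"):
        # Assumed key names -- verify against your own opentext.ini.
        ini[section]["Database"] = "LLCLONE"
        ini[section]["HostName"] = "clonedbhost"

with open(ini_path, "w") as f:
    ini.write(f)
```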
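And the scrubbing side, i.e. making sure the clone cannot reach anything productive. The module name prefix and the "prod marker" strings below are made-up examples, and depending on your version the EFS path and admin-server registrations can also live as rows in the clone database, so follow a file-level pass like this with a SQL pass.

```python
# scrub_clone.py -- sketch of scrubbing the clone: drop custom modules
# and hunt down leftover references to the prod EFS or admin server.
# The prefix and marker strings are illustrative assumptions.
import os
import shutil

OTHOME = r"D:\LL971\OTHOME"
CUSTOM_PREFIXES = ("acme",)              # our in-house modules' prefix
PROD_MARKERS = ("prodefs", "prodadmin")  # strings that mean trouble

# 1. Remove custom module directories (OTHOME\module\<name>_<version>).
module_dir = os.path.join(OTHOME, "module")
for entry in os.listdir(module_dir):
    if entry.lower().startswith(CUSTOM_PREFIXES):
        shutil.rmtree(os.path.join(module_dir, entry))
        print("removed module directory", entry)
    # NB: also delete the matching line from the [Modules] section of
    # opentext.ini, or the server will look for code that is gone.

# 2. Grep the config for anything still pointing at production. Where
#    the EFS path and admin-server info live varies by version (ini
#    entries, database rows, or both), so treat this as a first pass.
config_dir = os.path.join(OTHOME, "config")
for fname in os.listdir(config_dir):
    path = os.path.join(config_dir, fname)
    if not os.path.isfile(path):
        continue
    with open(path, errors="ignore") as f:
        for lineno, line in enumerate(f, 1):
            if any(m in line.lower() for m in PROD_MARKERS):
                print(f"{fname}:{lineno}: {line.strip()}")
```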
At this point one can take this environment to a CS 10.5 binary and start the upgrade. That is my next article.
BTW, I am not a novice when it comes to upgrades: I started using Livelink software in 1999 and have worked with it since version 8.1.5, so I pretty much know the heartbeat of an upgrade and take pleasure in the challenges and in methodically removing them. Do not try an upgrade if you have not dry-run the procedure, without fail, a couple of times. In most cases the upgrade itself can be done in a timely manner; the planning can take weeks, if not months.