Administrators and users sometimes do careless things and then want to cover their tracks quickly, without attracting unwanted attention. Most people working with Livelink know that the application is responsible for data integrity and referential integrity, so to properly reverse a mistake you would need to script something back. Here you are at a disadvantage: if you go through standard OT support, the most you are likely to get is "you have an error that is not easily fixable and we recommend you fix it with our professional services team". A small organization does not have the resources or money for that, so the administrator or a power user turned developer tries to see whether something can be done. Lo and behold, the KB is consulted, and members who like to show off post their suggestions. That goes for me too, because hardly a day goes by when I don't respond to something. However, I am very judicious when I tamper with the Livelink database, because I understand Oscript and I have seen the myriad transactions that happen while it is working, doing upgrades and so on. In many organizations people are well versed in LiveReports, and many now have the WebReports product, which is an excellent alternative to certain kinds of programming or utility programs. This is where the problems start. Say you accept bad advice from the "learned KB people" and fix a column or data point in a table: you probably won't notice the damage until months or years later, as part of an upgrade, and what you did in earnest lands you in irrecoverable or costly difficulties. So RESIST the urge to change values in OT-owned tables. If it is your own table and you are responsible for it, go right ahead. There is nothing wrong with playing in a VM to understand the schema, but don't think that touching LLAttrBlobData or LLAttrData is all it takes to do category updates 🙂
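To illustrate why, here is the kind of read-only look I take before concluding anything about category data. This is only a sketch: the column names are from memory and may differ on your version, so verify them first and never turn it into an UPDATE.

-- Read-only peek at category data for one document; 123456 is a hypothetical DataID.
-- Column names (VerNum, DefID, AttrID, Val*) are assumptions from memory; verify before use.
SELECT ID, VerNum, DefID, AttrID, ValInt, ValStr, ValDate
FROM   LLAttrData
WHERE  ID = 123456
ORDER  BY VerNum, DefID, AttrID;

Every attribute of every version is its own row, and as far as I understand the same data is mirrored as a serialized blob in LLAttrBlobData, which is exactly why a hand-edited UPDATE on one table quietly leaves the other out of sync.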
That doesn't absolve OT as a company; it is expected to do its fair share. Some things might be:
Perhaps, with paid support, the customer gets ten free incident fixes: OT absorbs the development cost and passes the solution to a would-be admin/dev in the organization.
A library of compiled, easy-to-use reports and utilities: perhaps WebReports, perhaps a compiled Java app that uses REST or WSAPI, or even a utility Oscript module.
Some reassurance from support to quell the panic; for example, telling the customer that a deleted user or two need not be treated as an emergency.
I had a short stint working with Documentum, and I do not know whether the utilities it delivered were as highly priced as OT's, but it had a language called DQL (Documentum Query Language). Anyone knowledgeable about the schema could run reports and turn them into DQL commands, which would honor data integrity and referential integrity for them. WebReports does this in a way, but it takes enormous patience and complexity to work with, which again raises the question of whether I should have developers supporting my application. I would advise the WebReports people at OT to provide clear, crisp examples, one-to-five liners that work on any Livelink and any schema, a kind of pre-built library much like the canned LiveReports.
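Here is the kind of one-to-five liner I mean: a read-only report that should work on any Livelink schema. I am assuming the standard DTree/KUAF column names, so adjust if your version differs.

-- Documents per owner, a "canned report" style query.
-- DTree.UserID and KUAF.ID/Name are assumed to be the standard column names.
SELECT k.Name AS owner, COUNT(*) AS documents
FROM   DTree d
JOIN   KUAF  k ON k.ID = d.UserID
WHERE  d.SubType = 144          -- 144 = document
GROUP  BY k.Name
ORDER  BY documents DESC;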
I tested a user deletion today and tried to see which tables would be affected; for KB users who want to see it, it is here. Hopefully I will find a cheap hosting provider for my content instead of the KB, as I have been burned in my earlier attempts.
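If you want to repeat that experiment on a test box, a couple of read-only queries along these lines show what still references a user ID after a deletion. The DTree.UserID and KUAF column names, including the Deleted flag, are assumptions on my part; adjust to your schema.

-- Objects still owned by a given (possibly deleted) user ID; 1234 is a hypothetical stand-in.
SELECT DataID, Name, SubType FROM DTree WHERE UserID = 1234;

-- Does the user row itself still exist in KUAF, or is it only flagged?
SELECT ID, Name, Deleted FROM KUAF WHERE ID = 1234;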
BTW, this is a bit like a cookery show where they put the main course in the oven and pull out another, already finished one. I did not accomplish any of this in one day; it took a lot of homework and trial runs. I read the case studies by the great Rob Coutts many times over.
For sanity, we decided to use a new box for CS10.5.
Created a CS10.5 system, added all the modules we needed, and connected it to a dummy database and dummy EFS. Just for kicks we added some that did not exist in 971, like Classifications and RecMan (forward thinking). Ran rudimentary checks to ascertain base functionality. Note that all relevant hotfixes and patches were also applied.
Cloned this server to three others for front ends and agents. These are just architectural chores: after the LL installer creates the services, copy the OTHOME over, give the servers distinct names, and so on. If your copies are made while IIS or any of the Livelink services are running, you can end up with corrupt DLL copies, so make sure you follow proper protocol. In most cases you do not need to run the optional modules installer on your 2nd through nth CS servers; for most DLLs Livelink needs, the Oscript code will try to push them to the Windows system folders on every startup.
Saved this build and kept copies as backups.
It is worth verifying the database version requirements for 10.5: your 971 DB might have been on 11.2.0.2, while CS10.5 needs a later minimum patch level (check the release notes for the exact version). These are database chores your DBA should know. Always give Oracle plenty of memory (SGA, etc.); OT has a tech article on how this should be done.
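A quick sanity check you (or the DBA) can run to confirm what the clone is actually running is the standard Oracle version view; nothing Livelink-specific here.

-- Shows the Oracle server version of the database you are about to upgrade against.
SELECT banner FROM v$version;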
Connected the binary to the prepped 971 database. One of the first screens still showed the 971 box as the Admin server; I changed it to the new box.
It wanted restarts, which is a good sign. One thing I do, since I know the DB upgrade is done by a single thread, is set the thread count to 8; if you have lots of threads they indirectly add to the Oracle load. I also do my upgrades with debug=2 and wantlogs=true. It is almost impossible to live without BareTail or a good ASCII editor.
The other threads will just produce a warning message that the "upgrade is in progress".
Once the upgrade starts you should see the thread<nn>.out file issuing Oscript commands and its corresponding connect<nn>.out doing the DB work. Your heart will rejoice if you have them open in BareTail 🙂 In my case a core upgrade moves my 971 schema (6.0.8) to CS10.5 (6.2.58); if it does that, it is a successful upgrade. Times will vary depending on data, content, and the horsepower of your DB. Do not try to run arithmetic on the schema numbers; they are not continuous, although looking at the DB upgrade log you can see the steps. The places where it breaks give OT clues, and if you know Oscript you can chase a lot of them yourself. In my case it was, let's say, a very smooth upgrade.
Since we had optional modules, after the core upgrade all optional module schemas were introduced or upgraded. You should see it in bold letters on the pre-upgrade page: "Your Content Server schema will upgrade from <nn> to <nn>". The Classifications module will be introduced, RecMan will be introduced (new things), and ADN will upgrade from <nn> to <nn>. BTW, I had to downgrade ADN to a lower version because the latest and greatest would not cleanly cooperate with the upgrade. You will not notice the difficulty if your database is new.
The reason OT says to upgrade optional modules after the core upgrade is that it is easier for them to pinpoint a failure.
The way the Livelink code works on every restart is that it reads opentext.ini, and one of the first modules to load is DBWIZAPI. It first tries to ascertain whether the core schema matches what the binary says it should be; otherwise it forces a DB upgrade. Before releasing the software to listen for requests on 2099 (the suggested default), it compares each module's version in the INI against the schema version recorded in the database. So if one box had a schema-aware module such as Forms whose database schema said 2.0.4 and this box's Forms module said 2.0.3, it triggers the familiar error "You have blah blah in the database but the blah blah module is lower/higher". That is the whole reason why experienced people, and nowadays OT, say to get one server (the anchor) done correctly and clone it onto the other boxes. Whether you install optional modules before or after, the Livelink code always performs this check on every restart.
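If you are curious what the database side of that comparison looks like, the version information is recorded in the KIni table. A read-only browse is harmless; I am assuming the standard IniSection/IniKeyword/IniValue column names, and the section and keyword names vary by release, so verify on your own system.

-- Browse the INI-style settings stored in the database, including module/schema versions.
-- Column names are assumed standard; section/keyword names vary by release, so just browse.
SELECT IniSection, IniKeyword, IniValue
FROM   KIni
ORDER  BY IniSection, IniKeyword;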
Leaving the Admin Service running on any box is no problem. It contains Java code to talk to the search server, and memcached is dependent on the Admin Service. I don't really know what the Cluster Agent does, but it is OT's answer to smart patching. Once you register an Admin server to the database, that is when the box can be used for certain things like augmenting search.
We had to create a new search index as advised by OT so that items would come out indexed faster. I am always amazed at the search code, but unfortunately one cannot see it completely, as Oscript just talks to the Java code. Perhaps if I had time and the code were decompilable (I seriously doubt it; I have a feeling the Java is talking to compiled C++ code internally, how else would it scale so well). And yet people always complain about search …
I installed all our custom modules, created clones of this box for the different roles, and we were pleasantly done.
The rest was mainly releasing it to customers to test, gathering their suggestions, and voilà…
Recently I was tasked with upgrading our systems by a two-version jump, from 971 to 10.5. I will chronicle my experiences here and perhaps provide a structure. I also want to quell some myths about the whole process, just to make sure you can do this logically and correctly.
First, create a draft plan. In my draft plan I included this: I would need at least one VM that could house our production 971 binaries, a clone of the database, and a clone of the EFS. If you have been storing items in, say, Archive Server, you should have a playing copy of that as well. An OTAS-based Livelink upgrade can pose its own challenges if not done correctly, but it is no different from an EFS: if you do it wrong you can write test data into your productive store 🙂 This involves talking to other people like DBAs and storage folks, as a heavily used LL system can have tons and tons of data. You will most likely encounter the after-effects of long-time use, different administrators and their styles, and so on. After a few failed attempts I understood why OT says a DB5 verification is very important. The verification runs single-threaded and tries to pinpoint anomalies in the database. Most likely it will pinpoint content loss, such as bulk imports without actual version files et al. In my case the database had wrong pointers, duplicate DataIDs (wow), and removed KUAF IDs (somebody had fun with an Oracle tool). So after going back and forth between OT and us, I created a program (Oscript) to check the important structures. I list three here; your mileage will vary, but essentially I grabbed all categories, form templates, and WF map structures. None of these checks happen in DB5; it just looks for existence. In one hilarious case I uncovered a category data structure that mysteriously turned up as a drawing PDF. One could argue that users would have noticed, but this was an old category that nobody used. You don't have to do this, but each of those anomalies would prevent an upgrade and mean more back and forth with OT support, so I did it just to gain some time.
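For the curious, the structural part of those checks can be approximated with plain SQL before you ever involve Oscript. A rough sketch, assuming the standard DTree/KUAF column names; verify them on your own schema first.

-- Duplicate DataIDs: no rows should ever come back.
SELECT DataID, COUNT(*) AS copies
FROM   DTree
GROUP  BY DataID
HAVING COUNT(*) > 1;

-- Objects whose owner no longer exists in KUAF (the "removed KUAF IDs" case).
SELECT d.DataID, d.Name, d.UserID
FROM   DTree d
WHERE  NOT EXISTS (SELECT 1 FROM KUAF k WHERE k.ID = d.UserID);

-- Orphaned pointers: children whose parent row is gone (-1 is the usual top-level parent).
SELECT d.DataID, d.Name, d.ParentID
FROM   DTree d
WHERE  d.ParentID <> -1
AND    NOT EXISTS (SELECT 1 FROM DTree p WHERE p.DataID = d.ParentID);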
The second thing was that I had to gather all the optional modules. Our system is no different from any other: a lot of optional modules, some inserting schema, some not.
The third thing was that our systems were customized with Oscript to provide an enhanced user experience, as well as awesome home-written modules that take the tedium out of long-running things like category upgrades and permission pushes, plus a very intelligent add-on to the OI module that people can use generically to upload their data. So our team sat down and thought about what we could retire and what we should re-code or refactor; many modules were thrown out, and many useful ones we kept. We particularly liked the Distributed Agent framework; ours is very similar, and if the time comes and there is an appetite we would re-code ours onto it.
About the re-coding effort, here's the clunker: we were to use CSIDE. Naturally all of my team are hard-core Builder aficionados, and boy, that was a journey. I think we were probably the first org to do something serious with CSIDE, so I gave OT a lot of screenshots and dumps, and OT dev was very receptive. Under extremely tight time constraints we developed parallel modules in 971 and 10.x and checked them into our source-control system, TFS. We also learned how to work in parallel on Oscript; not being a code-churning house, our code is not that complicated, so if I was working on one module somebody else was working on another.
So the environment we prepared was:
A 971 VM with the same Oracle client and a clone of the same DB type.
I removed the prod EFS and prod Admin server info from the clone.
We decided to rebuild the search index, as nobody had any clue whether the old one was good and it predated the new search technology by eons.
A base install of LL was done, and the working 971 binary copy from prod was laid over it as an overlay. Many people starting out do not know this technique, where you get a copy of your prod binary, replete with patches et al, into a playing system. BTW, this is what you hear called the "parallel upgrade" approach. Its database connection was then redefined. In the olden days, when there was no differentiation between 32 and 64 bits and VMs were not popular and cheap, it was possible to use the existing boxes and run the upgrade in place; this was the "update an instance" method. It is largely vestigial at this point and very error prone, not to mention that you would have a real hard time going back to the working version 🙂
All old 971 custom modules were removed, so this database only had knowledge of OT software, core and optional.
Did the health check I mentioned earlier, looking at categories, form templates, and WF maps, and removed any that would hamper an upgrade (a sketch of the kind of query involved is just below).
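As a rough illustration, the existence side of that check starts with something like the query below. The subtype numbers are the commonly documented ones, and the deeper validation of each object's actual data was Oscript, not SQL.

-- Enumerate the structures worth sanity-checking before an upgrade.
-- 128 = workflow map, 131 = category, 230 = form template (commonly documented subtypes).
SELECT DataID, Name, SubType
FROM   DTree
WHERE  SubType IN (128, 131, 230)
ORDER  BY SubType, Name;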
At this point one can take this to a CS10.5 binary and start the upgrade. That is my next article.
BTW, I am not a novice when it comes to upgrades. I started using Livelink software in 1999 and have worked with it since version 8.1.5, so I pretty much know the heartbeat of an upgrade and take pleasure in the challenges and in methodically removing them. Do not attempt an upgrade if you have not dry-run the procedure, without fail, a couple of times. In most cases the upgrade itself can be done quickly; the planning can take weeks, if not months.
A form, as the name implies, is an object where you can capture input and store it in Livelink. What you get out of it depends entirely on how you design your forms. I wrote this because a user was asking about it in the KB.
Main attributes: forms only work after installing the Forms module, which in turn gives you the Form Template subtype (230). This serves as the straw man for your data requirements.
Installing Web Forms gives you HTML form capability; likewise, installing PDF Forms gives you PDF capability.
Almost all organizations will need the data in forms to be reported on, so use SQL as the storage option. If you keep the default, the data can go into version files, which are not parsable by a SQL tool and carry no header info (you won't know which field is which). Users of the Transmittals product usually get bitten by the fact that most of the work in transmittals revolves around forms; if you don't make it SQL storage you will never be able to report on it. On my first project installing Transmittals I bungled this, and then I looked under the covers to understand it. Even now there is no mention in any guide saying change forms to SQL storage. A recent help to a…
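Once a form template is switched to SQL storage, reporting becomes ordinary SQL against the table the template creates. The table and column names below are purely hypothetical placeholders (the real names come from the template's SQL storage settings), so treat this as a sketch.

-- Hypothetical form table "PurchaseRequest_1" with made-up columns; substitute the
-- table and column names your own form template actually generated.
SELECT RequestedBy, COUNT(*) AS requests, SUM(Amount) AS total_amount
FROM   PurchaseRequest_1
GROUP  BY RequestedBy
ORDER  BY total_amount DESC;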
You can greatly improve a user's Livelink workflow experience by changing the process to use forms while maintaining the business logic of the workflow.
Advanced Topics in Forms/Workflows
I put this here; it is only of real use if you know Oscript.
Livelink comes with an extremely powerful workflow engine. At a minimum, even if you have only installed Livelink OOB, you get decent workflow features. This involves a "business user" designing a logical map that may involve "players" or "roles". The only criterion is that the process be repeatable, i.e. something you could plot in Visio or the like. Note that Livelink WF predates the workflow consortium's open standards, but it has almost everything one sees in the standard.
Some key terms-
WF Map - A DTree object identified by a DataID.
WF Manager - By default, the person who created the original map. Best practice dictates that you assign a proper Livelink group as the 'Master Manager'; this is not to be confused with the Livelink team's manager or an organizational manager. In theory this manager can re-assign steps and repaint the map. It is one of the few places where the sysadmin profile or the 'admin' user won't cut it, so do your due diligence and assume that at some point a higher-up user may be asked to assist. Please do not run maps with <Initiator /> as the Master Manager; it is quite possible these users only know how to click, and when something breaks nobody can intervene. You can restrict them to "See Details", which is more than enough in many cases.
WF Instance - The process logic that is set in motion when a WF map is initiated. It is easily identifiable by a clickable link in the WF manager's assignments, and it will show the current step and useful info in GREEN.
Steps - The series of steps that can be assigned to processes and users. This is what the business user can be trained to paint; there are a myriad of step types that OT puts on the palette.
Role Based or Map Based - If you create a map OOB, Livelink gives you a map where the forward routing can be done by business programming, e.g. if a WF attribute is "green" then the assignee is this group. If you change the map to "role" based, then before initiation the initiator has to fill in all the people in the workflow. People who use the Transmittals product can see this in action very clearly.
Attachments Volume - By default the attachments package is permissioned Full Control for "Public Access", a very bad idea, since when a casual user searches he will see unwanted or secretive results. Always set sensible permissions there. If you do not give participants Add Items up to Delete in that volume, you will get crude messages from Livelink. Note that the truth table implementation is observed by Livelink on any kind of object creation, so to avoid problems in workflow OT assigns default Full Control to Public Access. Just observe what OT gives, change it to a good group, and add all the people in the workflow to that group.
WF Attributes - Helper attributes modelled after category metadata. They allow the business user to route the map.
Forms - Optional package; see the NEXT post.
Loop Back - Be extremely careful with this, as you can create an infinite WF instance.
Item Reference - You can point to existing object subtypes in Livelink, like folders and documents, so in conjunction with Item Handlers you can perform "auto magic".
It is quite possible to train a good business user or a Livelink user in a day to understand the process flow and swim lanes. I usually design my maps on paper. I maintain a cheat sheet of sorts, and many of the things in this article are based on it. Always run "Verify Map Definition" to understand any problems the map may have.
The Item Handler is a predictive step modeled on placeholder logic (Design Book 3); although it appears useful, it can be frustrating, as very little process automation can be done by it alone. The workaround, or the "auto magic", is usually a human at a prior step. If you use it with the XML WF Extensions, a lot of "auto magic" can be done with it.
The "XML WF Extensions" is an optional module that uses the capability of Livelink objects to have an XML representation (everybody has probably heard of XML Export/XML Import).
The XML WF Extensions use XML representations of Livelink objects and manipulate those objects with a SAX (or DOM; I'm not sure which, I know LL has both) parser. Usually people get bitten by load balancers and file system permissions.
Likewise, XML Workflow Interchange can send to and call the web services of other systems.
ESign - A specialized workflow that can do the electronic signatures prevalent in 21 CFR Part 11 operations.
The workflow engine is perfectly suited for Livelink organizations that will not invest in Oscript coding, or that have been scared off writing Oscript by OT sales/marketing, yet still want to get value from this remarkable product. Oscript was very expensive when I started Livelink programming, so there is a personal bias here as well: I like to look at the map, and many times I can get a one-to-one representation of the process.
Privileges apply to you as the user, whereas permissions are what the object allows a user to do; there is a difference. For example, to install something in Windows you need to be part of the "Administrators" group; similarly, to install something in Livelink you need to be in the "System Administrators" group. An administrator-group user has very high privileges in the OS, and similarly a high degree of authority in Livelink. So when a user is created we set, at a minimum, these privileges: Login Enabled and Public Access. Then, when an object is accessed at run time, Livelink's algorithm is roughly: does the user have login, does the user have Public Access, does the object grant Public Access, and at what level (See, See Contents, and so on); if so, the user will be able to click the link. In very simple terms, use Public Access effectively for content that is useful to the whole environment, almost like using Livelink as a glorified web server.
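As a read-only illustration of the object side of that check, permissions live in DTreeACL as one row per right holder with a bitmask. The table and column names are assumed standard, and the meaning of the special negative RightID values (owner, owner group, Public Access) varies, so eyeball the rows rather than trusting any hard-coded value.

-- All ACL entries for one object: each row is a right holder (a user, a group, or a
-- negative special ID such as owner/public) with a permissions bitmask.
-- Table/column names assumed standard; verify against your schema documentation.
SELECT a.DataID, a.RightID, a.Permissions
FROM   DTreeACL a
WHERE  a.DataID = 123456;   -- hypothetical DataID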
The other privileges are self-explanatory: User Creation, System Administration, and others, down to making Group Leaders, etc. In short, if you spend some time understanding the architecture you will be amazed at the simple but very effective thought that has gone into its development.
The Add New Item Truth Table
The Livelink system is replete with subtypes. Some of them are very useful and harmless, like folders and documents, so nobody changes anything from the defaults. However, you would be very concerned if somebody got unauthorized access to LiveReports, workflows, form templates, and so on. What I do is look at the personal workspace of a default user and see all the "New Items" being shown there, then evaluate what an untrained user should be allowed to create in my system. For all the sensitive ones I create a manual group, say "r-Form Template Creators", and add 'Admin' to it; later, if I trust a group or a user, I expand the group. Now when such a person goes to a folder where he has "Add Items, Reserve, Delete" permissions, he will see "Form Template" there; if he moves to another folder where he has only "See, See Contents", he does not see it. Note that different object subtypes do different things. I am not keen to explain everything, like Tasks, Discussions, Projects, etc.; they are for you to explore and set right. Before releasing a system to production, give a simple user access and do some gorilla testing; that will help you more than the money spent on stress testing. Obviously you should have at least one or two good front ends and a good beefy admin server to do your search stuff.
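If you later want to audit who actually ended up in one of those creator groups, membership is queryable. KUAF holds both users and groups; as far as I recall (treat it as an assumption), KUAFChildren holds the parent/child relationships.

-- Members of a hypothetical "r-Form Template Creators" group.
-- KUAFChildren(ID, ChildID) as the membership table is an assumption; verify on your schema.
SELECT u.ID, u.Name
FROM   KUAF g
JOIN   KUAFChildren c ON c.ID = g.ID
JOIN   KUAF u        ON u.ID = c.ChildID
WHERE  g.Name = 'r-Form Template Creators';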