Why OScript solutions are not passé

Over the years in my work with OpenText Livelink, I have been fortunate to associate with like minds who, like me, share their code, goodwill and advice selflessly. I cherish and hold their friendship above all professional accolades I have ever received. Recently I had lunch with Ossie Moore (Ossie's LinkedIn); it was very short, and I would have loved it to last longer, as we could talk shop endlessly. Most of what I write here has a bearing on our lunch, and he did steer this post. Ossie, for people who do not know him, is quite a character. Say you had a requirement: most of us would probably settle for pretty decent code in OScript or LAPI, as those were the predominant development methods one had at the time. Not Ossie. He would write our kind of code, and before releasing it he would have at least two better implementations; he would have thought about the user coming through the web, the other interfaces, LL Explorer and so on. That is the difference.

Let’s look at ECM deployments in major organizations.

X Organization has identified a massive unstructured data problem. X has a new manager on the block who has had experience with a technology vendor; insert <vendorname> here. This is how most ECM deployments happen: a lot of people passionate about some technology who solutionize the deployment to suit the vendor. Most of them are assisted by Gartner, Forrester and those kinds. I started working in OT technology before these rating companies came up with a barometer of wants, so I am going to continue with my cynicism.

So X Organization rolls out an OOB approach and people are brought in. Naturally, if the product is user friendly and easy, its use will grow. Simple things like user activity, data growth and so on are good ways to identify its usefulness. Now most people will want to send a link to an object and hope that it won't break on re-organizing the structure or re-parenting, which is what most document work revolves around. So you may want to invest in something that people have put thought into, like a GUID-based system (Livelink, DCTM).

Frankly, I know only Livelink, and DCTM very little. I hate Sharepoint (we call it $carepoint) with a passion, because the smallest of things are so difficult to get done. Good UI, and all the world has gone for it, but anything complex you have to buy or build with an array of .NET developers, different frameworks, the works. It could also be that I have only been programming in OScript and spending time in LL, so over the years the idiosyncrasies of Builder and CSIDE no longer matter to me, because OScript really has not changed very much. OScript is just a huge-possibilities language. One of my mentors, John Simon, very famous in OT circles, would joke like this: you have a problem you try solving in OScript; you will spend about 3 to 4 days on it, only to find out that what you need is already available. No need to kludge; just poor documentation, that is all.
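Since so much document work hinges on links that survive re-organization, here is a minimal OScript sketch of the idea, assuming the usual func=ll URL form that resolves objects by ID. The host and CGI path are placeholders for your own server:

```
// A sketch: build a link that targets an object by its stable ID rather
// than by its folder path, so re-parenting the structure does not break it.
function String StableLink( Integer objID )

	// Placeholder: substitute your own server's base CGI URL here.
	String cgiURL = "http://yourhost/livelink/livelink.exe"

	// objAction=browse suits containers; pick the action suited to the type.
	return Str.Format( "%1?func=ll&objId=%2&objAction=browse", cgiURL, objID )

end
```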

So if you are eager to use Livelink to its full extent, do these things:

  1. Challenge an OT person or an OT solution marketer on what they are offering you. Challenge them extremely hard if it is offered at $$$ a user or so. That is how they make their money on licenses.
  2. Challenge the pitch if it claims OScript is bad and spoils your chances of an upgrade. There is an element of truth in that: I have seen people hacking their way into core code, WebLingo and such no-nos, which shows inexperience and desperation. OScript is not just understanding the creation of a request handler, WebNodeAction or event scripting; it is an all-round knowledge of the product and a passion for the product. You can immediately tell whether the programmer knows anything, because if they start talking about new subtypes you may want to ask them what prompted them to create a subtype; many times those people have access to the OScript tutorials, and the "Hello World" of OScript introduces them to that. You really want an OScripter who will follow what OT advises; in that case it is very rarely a bad fit. (See the subtype audit sketch after this list.)
  3. Challenge whether the pitched solution ends up working only in the web GUI. What about EC? What about SOAP, REST and Search?
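On the subtype question in point 2, here is a hedged sketch of the audit I would run before accepting a design built on new subtypes: count how the existing custom subtypes are actually used. The 30000+ range is an assumption about where custom subtypes live; adjust it for your own module registrations.

```
// A sketch: see how many instances each custom subtype actually has
// before agreeing to add more of them. The >= 30000 range for custom
// subtypes is an assumption; check your own site.
function Void AuditCustomSubtypes( Object prgCtx )

	Record r
	Dynamic result = CAPI.Exec( prgCtx.fDbConnect.fConnection, "select SubType, count(*) as CNT from DTree where SubType >= :A1 group by SubType", 30000 )

	if IsError( result )
		Echo( "Subtype audit failed: ", result )
		return
	end

	for r in result
		Echo( Str.Format( "Subtype %1: %2 instances", r.SubType, r.CNT ) )
	end

end
```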

OScript has years to go as long as the LL solution is still being used by customers, and it can be made to work wonders for you. The other aspect, and full credit to Ossie here, is that LL is the most open product, much more open than open source: an OScripter can see the entire internals of it. So people like me, John and Ossie have all found bugs, and we all engage the good OT developers on our finds; most senior and passionate OScripters continue to do so. There is so much openness and welcome in the OT programming community: people like David Templeton, Kyle Swidrowich and countless other good people have been striving to keep this open and to get communities to try it out for a better product. They have helped me and a lot of others, and you can also get in on the action.

You just need the drive and passion to master LL; that is all it takes. Well, that may be true for any technology 🙂


To Undelete or Not

Many times administrators and users do stupid things and want to cover their tracks quickly, without creating unwanted attention. Most people working with Livelink know that the application is responsible for data integrity and referential integrity, so to do any proper reversal you would need to script something back. But again, here you are at a disadvantage. If you resort to standard OT support, the most you are going to get is "you have an error that is not easily fixable and we recommend you fix it with our professional services team". A small org or company does not have that many resources or that much money, hence the administrator, or a power user turned developer, will try to see if something can be done. Lo and behold, the KB is consulted, and most members who would like to show off will post their comments. That goes for me also, because there isn't a day that goes by when I cannot respond to something.

However, I am very judicious when I tamper with the Livelink database. That is really because I understand OScript and I see the myriad transactions that happen when it is working, doing upgrades and such. But in many organizations people are well versed in LiveReports, and many now have the WebReports product, which is an excellent alternative to certain kinds of programming or utility programs. This is where the problems start. Say you accepted bad advice, like fixing a column or data point in a table that the "learned KB people" advised; you would probably not notice this until months or years later, as part of an upgrade, and what you did in earnest will land you in irrecoverable or costly difficulties. So RESIST the urge to change values in OT-given tables. If it is your table and you are responsible for it, do so by all means. There is no problem with you playing in a VM and understanding it; just don't think that touching LLAttrBlobData or LLAttrData is all it takes to do category updates 🙂
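To make the "your own table" point concrete, a minimal sketch, assuming a hypothetical MY_CUSTOM_TABLE that you own: write through the product's connection inside a transaction, and leave the OT tables alone.

```
// A sketch, assuming a custom table you own (MY_CUSTOM_TABLE and its
// columns are hypothetical). Wrap writes in a transaction so a failure
// rolls back cleanly; never do this against OT-owned tables such as
// LLAttrData or LLAttrBlobData.
function Void UpdateMyOwnTable( Object prgCtx, Integer id, String status )

	Dynamic result

	prgCtx.fDbConnect.StartTrans()

	result = CAPI.Exec( prgCtx.fDbConnect.fConnection, "update MY_CUSTOM_TABLE set STATUS = :A1 where ID = :A2", status, id )

	if IsError( result )
		prgCtx.fDbConnect.EndTrans( FALSE )	// roll back
		Echo( "Update failed: ", result )
	else
		prgCtx.fDbConnect.EndTrans( TRUE )	// commit
	end

end
```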

That doesn't absolve OT as a company; it is expected to do its fair share. Some things might be:

  1. Perhaps, with paid support, the customer gets 10 free incident fixes. OT absorbs the development cost and passes the solution to a would-be admin/dev in the organization.
  2. A list of compiled, easy-to-use reports and utilities: perhaps WebReports, perhaps a compiled Java app that uses REST or WSAPI, or even a utility OScript module.
  3. Some reassurance from support to quell the panic, and perhaps advice to the customer, like telling them that a deleted user or users need not be considered a panic situation.


I had a short stint working with Documentum, and I do not know whether the utilities it delivered were high priced like OT's, but it had a language called DQL (Documentum Query Language). So anyone knowledgeable about the schema could write reports and turn them into DQL commands, which would basically honor data integrity and referential integrity for them. This is in a way what WR does, but it takes enormous patience and complexity to work with, which again raises the question: should I have developers supporting my application? I would advise the WR people in OT to provide clear, crisp examples, 1-to-5-liners that work with any Livelink and any schema, a kind of pre-built library, more like the canned LiveReports.
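To illustrate the kind of crisp 1-to-5-liner I am asking for, here is a canned-report-style sketch in OScript terms; subtype 144 is the standard document subtype.

```
// A canned-report-style one-liner: how many documents does the system hold?
// Subtype 144 is the standard document subtype.
function Void CountDocuments( Object prgCtx )

	Dynamic result = CAPI.Exec( prgCtx.fDbConnect.fConnection, "select count(*) as CNT from DTree where SubType = :A1", 144 )

	if !IsError( result )
		Echo( "Documents in system: ", result[ 1 ].CNT )
	end

end
```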


I tested a user deletion today and tried to see which tables would be affected, so for KB users who want to see it, it is here. Hopefully I will find a cheap hosting provider to put my content on, and not in the KB, as I have been burned in my earlier attempts.

“Why it may not be a good idea to update livelink tables.docx” can be accessed via the following link: https://knowledge.opentext.com/knowledge/cs.dll/Properties/61862916
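Related to the point above, that a deleted user need not be a panic situation, here is the kind of first check I mean, as a hedged sketch; it assumes the KUAF table keeps the row with its Deleted flag set, which is worth verifying on your own schema before you rely on it.

```
// A sketch: a deleted user usually still has a KUAF row with the Deleted
// flag set (an assumption to verify on your schema), so check that before
// anyone panics or starts editing tables.
function Void CheckDeletedUser( Object prgCtx, String userName )

	Dynamic result = CAPI.Exec( prgCtx.fDbConnect.fConnection, "select ID, Name, Deleted from KUAF where Name = :A1", userName )

	if IsError( result ) || Length( result ) == 0
		Echo( "No KUAF row found for ", userName )
	else
		Echo( Str.Format( "User %1 has ID %2, Deleted = %3", result[ 1 ].Name, result[ 1 ].ID, result[ 1 ].Deleted ) )
	end

end
```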


General Help Series 7 - Upgrading a Livelink System: The 10.5 Part

Start with my first article in this series.

BTW, this is kind of like a cookery show where they put the main course in the oven and take out another, finished one. I did not accomplish any of this in one day; it took a lot of homework and trial runs. I read the case studies by the great Rob Coutts many times over.

For sanity we decided to use a new box for CS10.5:

  1. Created a CS10.5 system, added all the modules we needed, and connected it to a dummy database and a dummy EFS. Just for kicks we added some modules that did not exist in 971, like Classifications and RecMan (forward thinking). We ran rudimentary checks to ascertain base functionality. Note that all relevant hotfixes and patches were also applied.
  2. Cloned this server to 3 others for front ends and agents. This is just architectural: after the LL installer creates the services, copy the OTHOME over, give the services distinct names, etc. If your copies are done with IIS or any of the Livelink services running, corrupt DLL copies can end up in the clone, so make sure you follow proper protocol. In most cases you do not need to run the optional modules installer on your 2nd to nth CS servers; for most of the DLLs Livelink needs, the OScript code will try to push them to the Windows system folders on every startup.
  3. Saved this and copied it as backups.
  4. It is worth verifying the DB requirements for 10.5; e.g. your 971 DB might have been 11.2.0.2 but CS10.5 needs 11.2.0.4. These are database chores that your DBA should know. Always give Oracle lots of memory (SGA etc.); OT has a tech article on how this is to be done.
  5. Connected the binary to the prepped-up 971 database. One of the first screens named the 971 box as the Admin server; changed it to the new box.
  6. It wanted restarts, which is a good sign. One thing I do: since I know the DB upgrade is done by a single thread, I set the threads to 8; lots of threads only indirectly add to the Oracle load. I also do my upgrades with Debug=2 and WantLogs=true (see the opentext.ini fragment after this list). It is almost impossible to live without BareTail or a good ASCII editor.
  7. The other threads will just produce a warning message that the upgrade is in progress.
  8. Once the upgrade starts you should see the thread<nn>.out files issue OScript commands and their corresponding connect<nn>.out files doing DB work. Your heart will rejoice if you have them up in BareTail 🙂 In my case a core upgrade moves my 971 schema (6.0.8) to CS10.5 (6.2.58); if it does that, it is a successful upgrade. Times will vary depending on data, content and the horsepower of your DB. Do not try to run arithmetic on the numbers; progress is not continuous, although looking at the DB upgrade log one can see the steps. The places it breaks give clues to OT, and if you know OScript you can chase a lot of them yourself. In my case it was, let's say, a very smooth upgrade.
  9. Since we had optional modules, after the core upgrade all optional module schemas were introduced or upgraded. You should see it in bold letters on the pre-upgrade page: "Your Content Server schema will upgrade from <nn> to <nn>". Classifications will be introduced, RecMan will be introduced (new things), ADN will upgrade from <nn> to <nn>. BTW, I had to downgrade ADN to a lower version because the latest and greatest would not cleanly cooperate with the upgrade. You will not notice this difficulty if your database is new.
  10. The reason OT says to upgrade optional modules after the core upgrade is that it is easier for them to pinpoint a failure.
  11. The way Livelink code works, every time you restart it will read opentext.ini, and one of the first modules to load is DBWIZAPI. It will first try to ascertain whether the core schema is what the binary says it is; otherwise it will force a DB upgrade. Before releasing the software to listen to requests on 2099 (the suggested default), it will compare each module's version in the ini against the schema version recorded in the database. So if one box had a schema-aware module such as Forms whose ini said 2.0.4 and this box's Forms module said 2.0.3, it will trigger the familiar error "You have blah blah in the database but the blah blah module is lower/higher". (A sketch of this comparison follows after the list.) That is the whole reason why experienced people, and nowadays OT, say to get one server (the anchor) done correctly and clone it onto the other boxes. Whether you install optional modules before or after, Livelink always checks this on every restart.
  12. The Admin service left running on any box is no problem. It contains Java code to talk to the search server, and memcached is also dependent on the Admin service. I don't really know what the Cluster Agent does, but it is OT's answer to smart patching. Once you register an Admin server to the database, that box can be used for certain things like augmenting search.
  13. We had to create a new search index as advised by OT so that items would come out indexed faster. I am always amazed at the search code, but unfortunately one cannot see it completely, as OScript just talks to the Java code. Perhaps if I had time and the code were decompilable (I seriously doubt it; I have a feeling the Java talks to compiled C++ code internally, for how else would it scale so well). And yet people always complain about search…
  14. I installed all our custom modules, created clones of this box, assigned them their different roles, and we were pleasantly done.
  15. The rest was mainly releasing it to customers to test, getting their suggestions, and voilà…
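Regarding point 6, the upgrade logging switches live in the [options] section of opentext.ini. A minimal fragment as I remember it; treat the exact keys as something to verify against your own ini, and set the thread count through the admin pages rather than by hand.

```
[options]
; verbose per-thread logging during the upgrade; verify these keys on your ini
Debug=2
WantLogs=TRUE
```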
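Regarding point 11, here is a sketch of the comparison DBWIZAPI effectively makes. The two Assocs are assumptions standing in for however you gather the ini versions and the database schema versions; only the shape of the check is the point.

```
// A sketch of the check described in point 11: compare the module versions
// the opentext.ini declares against what the database schema says. How the
// two Assocs get populated is up to you; mismatches here are what trigger
// the familiar "module is lower/higher" error.
function List FindVersionMismatches( Assoc dbVersions, Assoc iniVersions )

	List mismatches = {}
	String moduleName

	for moduleName in Assoc.Keys( dbVersions )
		if Assoc.IsKey( iniVersions, moduleName ) && iniVersions.( moduleName ) != dbVersions.( moduleName )
			mismatches = { @mismatches, Str.Format( "%1: database says %2, ini says %3", moduleName, dbVersions.( moduleName ), iniVersions.( moduleName ) ) }
		end
	end

	return mismatches

end
```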

General Help Series 6 - Upgrading a Livelink System: The 971 Part

Recently I was tasked with upgrading our systems from their initial version, a two-version jump from 971 to 10.5. I will chronicle my experiences here and perhaps provide a structure. I also want to quell some myths about the whole process, just to make sure you can do this logically and correctly.

First, create a draft plan. In my draft plan I included this: I would need at least one VM that could house our production 971 binaries, a clone of the database and a clone of the EFS. If one had been storing items in, say, Archive Server, one should have a playing copy of that also. An OTAS-based Livelink upgrade can pose its challenges if not done correctly, but it is no different from an EFS: if you do it wrong, you can write test data into your productive store 🙂 This involves talking to other people like DBAs and storage folks, as a heavily used LL system can have tons and tons of data. You will most likely encounter the after-effects of long-time use, different administrators and their styles, etc.

So after a few failed attempts I understood why OT says a DB5 verification is very important. The verification basically runs single-threaded and tries to pinpoint anomalies in the database. Most likely it will pinpoint content loss, such as bulk imports without actual version files et al. In my case the database had wrong pointers, duplicate DataIDs (wow) and removed KUAF IDs (somebody had fun with an Oracle tool). So after going back and forth between OT and us, I thought it through and wrote an OScript program to check important structures. I list three; your mileage will vary, but essentially I grabbed all categories, form templates and WF map structures. None of these checks happen in DB5; it just looks for existence. In one hilarious case I uncovered a category data structure that mysteriously turned up as a drawing PDF. One could argue that the users would have noticed, but this was an old category that nobody looked at. You don't have to do this, but each of those anomalies would prevent an upgrade, and it is back and forth with OT support, so I did it just to gain some time. (A sketch of the check follows below.)
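Here is a hedged sketch of that structure check: enumerate the object types DB5 only tests for existence, then pull each definition through the product API and flag anything unreadable. The subtype numbers (131 for categories, 230 for form templates, 128 for workflow maps) are the standard ones, but verify them on your system.

```
// A sketch of the pre-upgrade structure check: list the categories
// (SubType 131), form templates (230) and workflow maps (128) that DB5
// verification only checks for existence. For each row you would then
// fetch the actual definition through the product API and flag anything
// that does not parse; that deeper fetch is left as a comment because
// the right call depends on your version.
function Void StructureHealthCheck( Object prgCtx )

	Dynamic rows = CAPI.Exec( prgCtx.fDbConnect.fConnection, "select DataID, OwnerID, SubType, Name from DTree where SubType in ( 131, 230, 128 )" )

	if IsError( rows )
		Echo( "Health-check query failed: ", rows )
		return
	end

	Record r
	for r in rows
		// Fetch r.DataID via the supported API here and verify the
		// definition is readable instead of, say, a drawing PDF.
		Echo( Str.Format( "%1 (subtype %2): %3", r.DataID, r.SubType, r.Name ) )
	end

end
```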

The second thing was to get all the optional modules. Our system is no different from any other: a lot of optional modules, some inserting schema, some not.

The third thing: our systems were customized with OScript to provide an enhanced user experience, as well as awesome home-written modules that take the tedium out of long-running things like category upgrades and permission pushes, et al., plus a very intelligent add-on to the OI module that people can use generically to upload their data. So our team sat down and thought about what we could retire and what we should re-code or refactor; many modules were thrown out, and many useful ones we kept. We particularly liked the Distributed Agent framework; ours is very similar to it, and if the time comes and there is an appetite, we will re-code ours onto it.

About the re-coding effort, here's the clunker: we were to use CSIDE. Naturally all of my team are hard-core Builder aficionados, and boy, that was a journey. I think we were probably the first org to do something with CSIDE, so naturally I gave them a lot of screenshots and dumps, and OT dev was very receptive. Under extremely tight time constraints we developed parallel modules in 971 and 10.5 and sucked them up into our source-code maintaining thing called TFS. We also learned how to work in parallel on OScript; not being a code-churning house, our code is not that complicated, so if I was working on one module, somebody else was working on another.

So the environment prepared was:

  1. A 971 VM with the same Oracle client and a clone of the same DB type.
  2. I removed the prod EFS and prod Admin server info from the clone.
  3. We decided to rebuild the search index, as nobody had any clue whether it was good, and it predated the new search technology by eons.
  4. A base install of LL was done and the working 971 binary copy from prod was put on top as an overlay. Many people starting out do not know this technique, where you get a copy of your prod binary, replete with patches et al., into a playing system. BTW, this is what you hear called the "Parallel Upgrade" approach. Its database connection was then redefined. In very olden times, when there was no differentiation between 32 and 64 bits and VMs were not popular and cheap, it was possible to use the existing boxes and run the upgrade in place; this was the "Update an Instance" method. It is largely vestigial at this time and very error-prone, not to mention you would have a real hard time going back to the working version 🙂
  5. All old 971 custom modules were removed, so this database had knowledge only of OT software, core and optional.
  6. Did the health check I mentioned earlier (look at categories, form templates and WF maps) and removed any that would hamper an upgrade.


At this point one can take this to a CS10.5 binary and start the upgrade. That is my next article.


BTW, I am not a novice when it comes to upgrades. I started using Livelink software in 1999 and have worked in version 8.1.5, so I pretty much know the heartbeat of an upgrade and take pleasure in the challenges and in how one methodically removes them. Do not try an upgrade if you have not dry-run the procedure a couple of times, without fail. In most cases the upgrade itself can be done in a timely manner; the planning can take weeks, if not months.