Over the years in my work with OpenText Livelink I have been fortunate to associate with like minds who, like me, share their code, goodwill and advice selflessly. I cherish and hold their friendship above all professional accolades I have ever received. Recently I had lunch with Ossie Moore (Ossie's LinkedIn); it was very short and I would have loved it to last longer, as we could talk shop endlessly. Most of what I write here has a bearing on our lunch, and he did steer this post. Ossie, for people who do not know him, is quite a character. Let's say you had a requirement: most of us would probably settle for a pretty decent implementation in Oscript or LAPI, as those were the predominant development methods one had at the time. Not Ossie. He would probably write our kind of code, and before releasing it he would have at least two better implementations; he would have thought about the user coming through the web, the other interfaces like LL Explorer, and so on. That is the difference.
Let’s look at ECM deployments in major organizations.
X Organization has identified a massive unstructured data problem. X has a new manager on the block who has had experience with a technology vendor. Insert <vendorname> here. This is how most ECM deployments happen: a lot of people passionate about some technology who solutionize the deployment to suit the vendor. Most of them are assisted by Gartner, Forrester and those kinds. I started working in OT technology before these rating companies came up with a barometer of wants, so I am going to continue with my cynicism.
So X Organization rolls out an OOB approach, and people are brought in. Naturally, if the product is user friendly and easy, its use will grow. Simple things like user activity, data growth etc. are good ways to identify its usefulness. Now most people will want to send links to objects, hoping they won't break on re-organizing the structure or re-parenting, which is what most document work revolves around. So you may want to invest in something that people have put thought into, like a GUID-based system (Livelink, DCTM). Frankly, I know only Livelink, and DCTM very little. I hate Sharepoint (we call it $carepoint) with a passion, because the smallest of things are so difficult to get done. Good UI, and all the world has gone for it, but anything complex you have to buy or build with an array of .NET developers, different frameworks, the works. It also could be that I have only been programming in Oscript and spending time in LL, so over the years the idiosyncrasies of Builder and CSIDE no longer matter to me, because Oscript really has not changed very much. Oscript is just a huge-possibilities language. One of my mentors, very famous in OT circles, John Simon, would joke like this: you have a problem you try solving in Oscript. You will spend about 3 to 4 days on it, later to find out that what you need is totally available; no need to kludge, just poor documentation, that is all.
So if you are eager to use Livelink to its full extent, do these things:
Challenge an OT person or an OT solution marketer on what they are offering you. Challenge them extremely hard if it is offered at $$$ a user or so. That is how they make their money on licenses.
Challenge them if the pitch is that Oscript is bad and spoils your chances of an upgrade. There's an element of truth in that: I have seen people hacking their way into core code, weblingo and such no-nos, which shows inexperience and desperation. But Oscript is not just understanding the creation of a request handler, WebNodeAction or event scripting; it is an all-round knowledge of the product and passion about the product. You can immediately tell whether a programmer knows anything, because if they start talking about new subtypes you may want to ask what prompted them to create a subtype; many times those people have access to the Oscript tutorials, and the "Hello World" of Oscript introduces them to that. You really want an Oscripter who will follow what OT advises; in that case it is very rarely a bad fit.
Challenge them if the pitched solution ends up working only in the web GUI. What about EC? What about SOAP? What about REST? What about Search?
Oscript has years to go as long as the LL solution is still being used by customers. It can be learned and made to work wonders for you. The other aspect, and full credit to Ossie here, is that LL is the most open product, much more open than open source: an Oscripter can see the entire internals of it, so people like me, John and Ossie have all found bugs, and we all engage the good OT developers on our finds. Most senior and passionate Oscripters continue to do so. There is so much openness and welcome in the OT programming community; people like David Templeton, Kyle Swidrowich and countless other good people have been striving to keep this open and to get communities to try it out for a better product. They have helped me and a lot of others, and you can also get in on the action.
You just need the drive and passion to master LL, that is all it takes. Well, that may be true for any technology 🙂
BTW, this is kind of like a cookery show where they put the main course in the oven and take out another, finished one. I did not accomplish any of this in one day; it took a lot of homework and trial runs. I read the case studies by the great Rob Coutts many times over.
For sanity we decided to use a new box for CS10.5.
We created a CS10.5 system, added all the modules that we needed, and connected it to a dummy database and dummy EFS. Just for kicks we added some modules that did not exist in 971, like Classifications and RecMan (forward thinking), and ran rudimentary checks to ascertain base functionality. Note that all relevant hotfixes and patches were also applied.
We cloned this server to 3 others for front ends and agents. This is just architectural housekeeping: after the LL installer creates the services, copy the OTHOME over, give the services distinct names, and so on. If your copies are made while IIS or any of the Livelink services are running, corrupt DLL copies can end up in the clone, so make sure you follow proper protocol. In most cases you do not need to run the optional modules installer on your 2nd to nth CS servers; for most DLLs Livelink needs, the Oscript code will try to push them to the Windows system folders on every startup.
We saved this and copied it off as backups.
It is worth verifying the DB version requirements for 10.5: for e.g. your 971 DB might have been on Oracle 11.2.0.2 while CS10.5 needs a newer patch level. These are database chores that your DBA should know. Always give Oracle lots of memory (SGA etc.); OT has a tech article on how it is to be done.
We connected the binary to the prepped-up 971 database. One of the first screens showed the 971 box as the Admin server; we changed it to the new box.
It wanted restarts, which is a good sign. One thing I do, since I know the DB upgrade is done by a single thread, is set the thread count to 8; lots of threads just indirectly add to the Oracle load. I also do my upgrades with debug=2 and wantlogs=true (the relevant opentext.ini switches are sketched below). It is almost impossible to live without BareTail or a good ASCII editor.
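For reference, this is roughly how those switches look in opentext.ini; the section and key names here are from memory, so verify them against your own install (or set them through the admin pages) before editing by hand:

```ini
[options]
; the DB upgrade itself runs on a single thread, so 8 is plenty;
; more threads just add to the Oracle load
Threads=8
; verbose logging for the upgrade run: these produce the thread<nn>.out
; and connect<nn>.out files discussed below
Debug=2
wantLogs=TRUE
```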
The other threads will just produce a warning message that the "upgrade is in progress".
Once the upgrade starts you should see the thread<nn>.out files issue Oscript commands and the corresponding connect<nn>.out files doing DB work. Your heart will rejoice if you have them on BareTail 🙂 In my case a core upgrade moves my 971 schema (6.0.8) to CS10.5 (6.2.58); if it does that, it is a successful upgrade. Times will vary depending on data, content and the horsepower of your DB. Do not try to run arithmetic on those numbers; they are not continuous, although looking at the DB upgrade log one can see the steps. The places where it breaks give clues to OT, and if you know Oscript you can chase a lot of them yourself. In my case it was, let's say, a very smooth upgrade.
Since we had optional modules, after the core upgrade all optional module schemas were introduced or upgraded. You should see this in bold letters on the pre-upgrade page: "Your Content Server schema will upgrade from <nn> to <nn>". The Classifications module will be introduced, RecMan will be introduced (new things), and ADN will upgrade from <nn> to <nn>. BTW, I had to downgrade ADN to a lower version because the latest and greatest would not cleanly cooperate with the upgrade. You will not notice this difficulty if your database is new.
The reason OT says to upgrade optional modules after the core upgrade is that it is easier for them to pinpoint a failure.
The way Livelink code works is that every time you restart, it reads opentext.ini, and one of the first modules that loads is DBWIZAPI. It will first try to ascertain whether the core schema is what the binary says it is; otherwise it will force a DB upgrade. Before releasing the software to listen for requests on 2099 (the suggested default), it compares the module section of the INI against the schema versions recorded in the database. So if one box had a schema-aware module such as Form and the database said 2.0.4 while this box's Form module said 2.0.3, it will trigger the familiar error "You have blah blah in database but blah blah module is lower/higher". That is the whole reason why experienced people, and nowadays OT, say to get one server (the anchor) done correctly and clone that to the other boxes. Whether you install optional modules before or after, Livelink always checks this on every restart. An illustrative fragment follows below.
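To make that version handshake concrete, here is an illustrative fragment of the module bookkeeping in opentext.ini, following the Form example above (the section and key format are approximate; inspect your own file):

```ini
[Modules]
; if the database schema section records form as 2.0.4 but this box's
; ini says 2.0.3, startup halts with the "module is lower/higher" error
form=2.0.3
```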
The Admin service left running on any box is no problem. It contains Java code to talk to the search server, and memcached also depends on the Admin service. I don't really know what the cluster agent does, but it is OT's answer to smart patching. Once you register an Admin server in the database, that box can be used for certain things like augmenting search.
We had to create a search as advised by OT so that items would be indexed faster. I am always amazed at the search code, but unfortunately one cannot see it completely, as Oscript just talks to the Java code. Perhaps if I had time and the code were decompilable (I seriously doubt it; I have a feeling the Java talks to compiled C++ code internally, how else would it scale so well?). And yet people always complain about search…
I installed all our custom modules, created clones of this box for the various roles, and we were pleasantly done.
The rest was mainly releasing it to customers to test, gathering their suggestions, and voila…
Recently I was tasked with upgrading our systems from their initial version, a two-version jump from 971 to 10.5. I will chronicle my experiences here and perhaps provide a structure. I also want to quell some myths about the whole process, just to make sure you can do this logically and correctly.
First, create a draft plan. In my draft plan I included this: I would need at least one VM that could house our production 971 binaries, a clone of the database and a clone of the EFS. If one had been storing items in, say, Archive Server, one should have a playing copy of that also. An OTAS-based Livelink upgrade can pose its challenges if not done correctly, but it is no different from an EFS: if you do it wrong you can write test data into your productive store 🙂 This involves talking to other people like DBAs and storage folks, as a heavily used LL system can have tons and tons of data. You will most likely encounter the after-effects of long-time use, different administrators and their styles, etc. So after a few failed attempts I understood why OT says a DB5 verification is very important. The verification basically runs single-threaded and tries to pinpoint anomalies in the database. Most likely it will pinpoint content loss, such as bulk-imported items without actual version files et al. In my case the database had wrong pointers, duplicate DataIDs (wow), and removed KUAF IDs (somebody had fun with an Oracle tool). So after going back and forth with OT, I sat down and wrote a program (in Oscript) to check the important structures. I list 3; your mileage will vary, but essentially I grabbed all categories, form templates and WF map structures. None of these checks happen in DB5; it just looks for existence. In one hilarious case I uncovered a category data structure that mysteriously turned up as a drawing PDF. One could argue that the users would have noticed, but this was an old category that nobody noticed. You don't have to do this, but each of those anomalies would prevent an upgrade, and then it is back and forth with OT support, so I did it just to get some time advantage. A sketch of the kind of checks I mean follows below.
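To give a flavour of that homework, here is a minimal sketch, in Java/JDBC rather than Oscript, using the classic Livelink table names (DTree, DVersData, KUAF). The column names and connection details are assumptions from memory, so adapt them to your schema, and point this at the clone, never production:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

/**
 * Pre-upgrade sanity sweep over a CLONED Livelink database.
 * It only reports suspect rows; it fixes nothing.
 */
public class PreUpgradeCheck {
    private static final String[][] CHECKS = {
        { "duplicate DataIDs in DTree",
          "SELECT DataID FROM DTree GROUP BY DataID HAVING COUNT(*) > 1" },
        { "nodes whose parent no longer exists",
          "SELECT DataID FROM DTree d WHERE d.ParentID > 0 AND NOT EXISTS "
          + "(SELECT 1 FROM DTree p WHERE p.DataID = d.ParentID)" },
        { "version rows with no owning document",
          "SELECT DocID FROM DVersData v WHERE NOT EXISTS "
          + "(SELECT 1 FROM DTree d WHERE d.DataID = v.DocID)" },
        { "nodes owned by removed KUAF IDs",
          "SELECT DataID FROM DTree d WHERE NOT EXISTS "
          + "(SELECT 1 FROM KUAF u WHERE u.ID = d.UserID)" }
    };

    public static void main(String[] args) throws SQLException {
        // placeholder connection string and credentials
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@dbhost:1521/LLCLONE", "livelink", "secret")) {
            for (String[] check : CHECKS) {
                try (Statement st = con.createStatement();
                     ResultSet rs = st.executeQuery(check[1])) {
                    int hits = 0;
                    while (rs.next()) hits++;
                    System.out.printf("%-40s : %d suspect row(s)%n", check[0], hits);
                }
            }
        }
    }
}
```

Anything this flags is exactly where the back and forth with OT support will start, so finding it early is the time advantage.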
The second thing was that I had to get all the optional modules; our system is no different from any other, with a lot of optional modules, some inserting schema, some not.
The third thing was that our systems were customized with Oscript to provide an enhanced user experience, as well as awesome home-written modules that take the tedium out of long-running things like category upgrades and permission pushes et al, plus a very intelligent add-on to the OI module that people can use generically to upload their data. So our team sat down and thought about what we could retire and what we should re-code or refactor; many modules were thrown out, and many useful ones we kept. We particularly liked the Distributed Agent framework; ours is very similar, and if the time comes and there is an appetite we would re-code ours on top of it.
About the re-coding effort, here's the clunker: we were to use CSIDE. Naturally all of my team are hard-core Builder aficionados, and boy, that was a journey. I think we were probably the first org to do something serious with CSIDE, so naturally I gave OT a lot of screenshots and dumps, and OT dev was very receptive. Under extremely tight time constraints we developed parallel modules in 971 and 10.5 and checked them into our source code maintenance thing called TFS. We also learned how to work in parallel on Oscript; not being a code-churning house, our code is not that complicated, so if I was working on one module, somebody else was working on another.
So the environment prepared was:
A 971 VM with the same Oracle client and a clone of the same DB type.
I removed the prod EFS and prod Admin server info from the clone.
We decided to rebuild the search index, as nobody had any clue whether it was good, and it was from eons before the new search technology.
A base install of LL was done, and the working 971 binary copy from prod was put on as an overlay. Many people starting out do not know this technique, where you get a copy of your prod binary, replete with patches et al, into a playing system. BTW, this is what you hear called the "Parallel Upgrade" approach. Its database connection was then redefined. In very olden times, when there was no differentiation between 32 and 64 bits and VMs were not popular and cheap, it was possible to use the existing boxes and run the upgrade in place. This was the "Update an Instance" method. It is largely vestigial at this time and very error prone, not to mention you would have a real hard time going back to the working version 🙂
All old 971 custom modules were removed, so this database had knowledge only of OT software, core and optional.
I did the health check I mentioned earlier (look at categories, form templates and WF maps) and removed anything that would hamper an upgrade.
At this point one can take this to a CS10.5 binary and start the upgrade. That is my next article.
BTW, I am not a novice when it comes to upgrades: I started using Livelink software in 1999 and have worked in version 8.1.5, so I pretty much know the heartbeat of an upgrade and get pleasure from challenges and from methodically removing them. Do not try an upgrade if you have not dry-run the procedure, without fail, a couple of times. In most cases the upgrade itself can be done in a timely manner; the planning can take weeks, if not months.
This is a post where I will try to share commonly sought advice, much of which does not require programming, just configuration. I hope to make this in a question-and-answer format. I will also try to answer commonly encountered problems, as long as it does not require me to stop my paid work.
Categories, Attributes and their mandatory aspects.
A category is a template in Livelink that lets you "model" your metadata requirements. In essence, after you "create" the category and create meaningful "leaves" or "attributes", you can let others use it as a model, and then you can use the awesome Livelink search API. Here's the rub: most container objects like folders do not make "mandatory" attributes compulsory, but if you add a versionable subtype like a document, they become mandatory. Now imagine somebody starts using an OT tool like EC or Explorer to copy a file system full of things into Livelink: the folders get replicated, but Livelink will stop the user from proceeding until the documents get their mandatory metadata filled in. If memory serves me right, OT even allows a flag that lets a value filled in at the top trickle down as a substitute. Now, if you are a programmer, integrator or even a casual user trying to get things done in a jiffy, what you need to know is that the "mandatory" aspect is tied to a Livelink server instance (not globally enforced) and is reversible per subtype. So you can go to the admin index and navigate to the command "Configure Required Attributes". You will find that Folder is unchecked while Document is checked; this is what prompts the mandatory pop-up when you add things. Note that if you defeat what the modelers intended, you will end up with data that does not get returned in a metadata search. Also, try to use the Livelink GUI as much as possible before you start programming, as programming just replaces the human clicker at the GUI; this is often overlooked by newcomers.
Added: Hugh pointed out the fallacy of a default attribute. Yes, it is true that people will not put in values if they can be inherited. But by the same token, if the category is in need of an upgrade and has 1 million items to be upgraded, it just adds to the cost of programming or operational efficiency. So the moral is: do your design judiciously, thinking not only of GUI users, but also of the times you may need to change the values around 🙂
BTW, supplying default values to help in an upgrade is what I have done most in my job 😉 Many times it is a quick and dirty job, with the mandatory values read off a file. In my current employment we have a beautiful batch framework that does all the bells and whistles; we could almost sell it to OT.
Form Based Livelink Workflows and WF Attribute based workflows
Oscript is a 4GL that is the basis of the application the OpenText company adopted when it chose to market Livelink, the document management system. It is safe to say that anything the Livelink product lacks, if it is a valid requirement, can be coded in this language. The power of the Oscript language takes root in the philosophy that was prevalent in the 60s; Oscript is a superior offering to traditional OOP languages such as Java or C++, or so the pundits say.
A trained Oscripter is able to understand how Livelink code works, as well as change it. Now, change to any COTS product comes at a cost; that is where strategy, as well as overall grasp of the product, comes into play. If an organization has decided for itself that it will spend the big bucks on Livelink, one approach is to identify ways to use it more. Where the organization will make mistakes is when it only sees the cost of the "customization". Many decision makers wrongly go after fly-by-night programmers/vendors, only to be totally left in the fray. OT has a professional vested interest in selling customers more code and services, so I have been in decisions where sales and marketing say "do not customize Livelink", but there is only half-truth in that. If you buy these OT customizations, they are almost certain to be written in Oscript, and in many cases they will have been written by senior Oscripters with thought given to maintenance and upgradeability as well. The same kind of service can be bought from other Oscript houses; in many cases these are extremely talented ex-OT'ers. Many Professional Services modules sometimes outgrow their original intent and become marketable modules as well (I have no inside knowledge and am just speculating on that). Once you have identified and decided to use Livelink, the best avenue is to hire or engage the services of reputed vendors. Modules containing Oscript code that I wrote in 1999 (first on a Livelink 8 system, then on Livelink 9.1) I can almost compile in the modern 10.5 version. My entire career was and is fruitful because I tried to interact, collaborate and share with the OT team in the KB, as well as on smaller websites like Greg Griffiths'. I was also able to mentor myself among very die-hard Oscripters I knew from work, and by tracing code to find bugs that needed closure before OT could provide fixes. Knowledge of OT code also allows me to be a better application architect, administrator and integrator.
If the organization is strong in Java and .NET skills, note that the user interface at this time cannot be changed using those languages; however, you may use the exposed APIs to make functioning applications that use Livelink as the data store.
Typical things organizations can and should do:
Create implementations of nodes, just like Folder or Document, if there is a need for them. Just because OT shows you an "Addressbook" or "Contacts" module does not mean every Livelink installation in the world needs new subtypes. It is there if you want to use it, when there is a complex business requirement that involves a "document"-like or a "folder"-like implementation. Do search the KB for such terms as "invasive customizations", overrides, orphans, subclasses, customizationsRT, weblingo customization.
Add commands on nodes; these are basically very easy to do in short order. These two are taught very nicely in class as well.
Write agents and distributed agents, and expose data to the columns and facets API, something that OT does not give you OOB.
Extending SOAP and REST can only be done using Oscript. If you are a shop that needs to program an integration using Java or C# and you see a gap in the existing offering, it is rather easy for an Oscripter to extend that.
OpenText programmers have black-boxed things so that many internals act as internal APIs. For e.g., we are able to do work without understanding how the "node" itself is created in Livelink. Another example is the way Oscript talks to the search system.
LAPI and its client installer have become very hard to find. Moreover, clients written in LAPI in, say, Java/.NET will only work if your Livelink a.k.a. Content Server is of a version less than CS16. Readers who are new to LL programming are encouraged to read this for the approach, not for the exact lines of code. What I mean is that when you used to program in LAPI, you were basically passing parameters to discrete calls, modelling them on the web GUI of Livelink. The SOAP-based web services, called CWS, are the same: if you do not first try the task in the web GUI and understand the business rules, you will have almost no success in CWS either. OT is notorious for not putting out fully functioning use cases and walk-throughs, so whenever possible I write code assuming the reader has not worked in Livelink for X number of years, and I try to educate you all. Livelink, Content Server, Enterprise Server: all of these have been Livelink's marketing brand names over the years. CS is used in many of the integrations like AGA, xECM and RMLink, and you know you are programming against Livelink if you see a link that looks like http(s)://somefriendlyURL/livelink.exe|cs.exe|llisapi.dll|cs.dll|livelink. In many places SAP/SP/Exchange will be configured to talk to Archive Server, and then Livelink is used to read into Archive Server and turn that content into LL objects for better presentment, RM and other aspects. The AGA product is moving away from LAPI (not sure whether totally or not) to the REST API in LL. A taste of what classic LAPI code looked like follows below.
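For the curious, this is the flavour of a classic LAPI client; the method names are from memory and the host, port and credentials are placeholders, so treat it as a museum piece for pre-CS16 systems rather than a recipe:

```java
import com.opentext.api.LAPI_DOCUMENTS;
import com.opentext.api.LLSession;
import com.opentext.api.LLValue;

/**
 * Classic LAPI sketch: discrete calls that mirror what a user
 * would do in the web GUI, one screen at a time.
 */
public class LapiSketch {
    public static void main(String[] args) {
        LLSession session = new LLSession("llhost", 2099, "", "Admin", "password");
        LAPI_DOCUMENTS docs = new LAPI_DOCUMENTS(session);

        // the equivalent of landing on the Enterprise workspace in the browser
        LLValue entInfo = (new LLValue()).setAssocNotSet();
        if (docs.AccessEnterpriseWS(entInfo) != 0) {
            System.err.println("Error " + session.getStatus() + ": " + session.getErrMsg());
            return;
        }
        int volID = entInfo.toInteger("VolumeID");
        int objID = entInfo.toInteger("ID");

        // the same business rules as the GUI apply: mandatory categories,
        // permissions et al will fail this call exactly as they would a user
        LLValue folderInfo = (new LLValue()).setAssocNotSet();
        if (docs.CreateFolder(volID, objID, "LAPI demo folder", folderInfo) == 0) {
            System.out.println("Created folder " + folderInfo.toInteger("ID"));
        }
    }
}
```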
Many people seem to have understood my post about LAPI.
The AGA (Sharepoint-to-Livelink integration) product is the only officially supported OT product that still seems to use the old LAPI binaries. Most of us have moved away from using it, although when you are in a time crunch you can still use it. If you still want to use LAPI, make sure you write your client code in Java, which takes care of most inconsistencies regarding the 64- vs 32-bit J# problems. If you are a modern-day .NET buff, look at how old LAPI can be run in new .NET ways.
Back to CWS/EWS: Livelink is branded Content Server, so it is no surprise that its API reflects that. The binaries that make up the WSDL are delivered, without any additional installation, in the same file path where the Livelink server is installed. So if you run your Livelink on an IIS web server, you just "expose" the code from OT as an application. If you run your Livelink on a Tomcat server, you just drop in the WAR file and TC will do the magic. The Web.Config in both deployments controls where it talks to. For e.g., in a Livelink install, if the Web.Config says localhost and 2099, then when you call the beautiful code from your client, it sends all that "stuff" to the Livelink "server" listening on "localhost" and "2099". None of this is hard coded, and most Livelink organizations will not use localhost resolution; as a client API programmer you don't need to worry too much. Ask for a web GUI account before you start any Livelink programming. It will keep you sane. What you are programming with a language are the same "business rules" that Livelink will throw at your code. Typical "new" programmer mistakes, like expecting the category workspace and enterprise workspace to be 2004 and 2000, should be avoided; these mistakes are commonly seen even in vended applications, and there are many Livelinks in the world that do not have those IDs. BTW, like many others, I also thought that to write Java client code I had to have Livelink running in a Java app server. No: I now regularly write .NET clients against a TC-deployed WSAPI and Java clients against an IIS-deployed WSAPI. I have also written Oscript-based CWS extensions, which are only needed when the OT-supplied stubs and proxies won't cut it for a particular task. BTW, when you install Records Management, Physical Objects and Classifications, they all deploy their own CWS APIs; ask your administrator to deploy them if you want to program against those functionalities. I advocate bundling everything under one website, as it makes the admin's life easier; such bundles are unsupported by OT, but I do it anyway. A hedged sketch of the typical client pattern follows below.
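Here is roughly what that client pattern looks like in Java; the package and port getter names follow what typical stubs generated from the OT WSDLs look like and may differ on yours, and attachOTAuthenticationHeader is a hypothetical helper standing in for the SOAP-header plumbing the OT samples provide:

```java
// stubs generated from the CWS WSDLs (Authentication, DocumentManagement);
// regenerate against your own server rather than copying these names
import com.opentext.livelink.service.core.Authentication;
import com.opentext.livelink.service.core.Authentication_Service;
import com.opentext.livelink.service.docman.DocumentManagement;
import com.opentext.livelink.service.docman.DocumentManagement_Service;
import com.opentext.livelink.service.docman.Node;

public class CwsSketch {
    public static void main(String[] args) {
        // authenticate first; the token must then ride in the OTAuthentication
        // SOAP header of every later call (the OT samples wire this up with a
        // SOAP handler on the binding)
        Authentication auth = new Authentication_Service()
                .getBasicHttpBindingAuthentication();
        String token = auth.authenticateUser("Admin", "password");

        DocumentManagement docMan = new DocumentManagement_Service()
                .getBasicHttpBindingDocumentManagement();
        // attachOTAuthenticationHeader(docMan, token); // hypothetical helper, see OT samples

        // never hard code 2000: ask the server where the Enterprise workspace is
        Node enterprise = docMan.getRootNode("EnterpriseWS");
        System.out.println("Enterprise workspace ID: " + enterprise.getID());
    }
}
```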
Once you are prepared to write your first lines of code, you may want to understand what you are doing.
OT markets a TC (Java stack) application called OTDS. It is some fairly simple Java code that interacts with Livelink, Active Directory or other LDAP sources. It relies on your call being passed off to an OTDS server, which uses industry-standard methods to create an auth token. In many simple installs it is akin to the cookie that is given to your browser after login. OT provides samples using it. It is also mandatory in several OT installs which are primarily not a DMS, like the SAP integration, the AGA integration and so on. You now have "kewl" tools like Fiddler, Wireshark etc. to debug many of these things. A minimal token request is sketched below.
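As an illustration only, here is what fetching an OTDS ticket over REST can look like; the endpoint path and JSON field names are assumptions about a typical OTDS install, so check your own version's documentation:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * OTDS authentication sketch. Host, path and field names are
 * placeholders/assumptions; verify against your OTDS version.
 */
public class OtdsAuthSketch {
    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create("https://otds.example.com/otdsws/rest/authentication/credentials"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"userName\":\"Admin\",\"password\":\"secret\"}"))
                .build();
        HttpResponse<String> resp = http.send(req, HttpResponse.BodyHandlers.ofString());
        // a successful response carries a "ticket": your cookie equivalent for later calls
        System.out.println(resp.statusCode() + " " + resp.body());
    }
}
```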
If you have an authentication token, half the battle is won. You can now try simple things like adding folders and documents, applying categories, etc. When you stumble, simply bring up the web UI in a browser and check that what you are trying is possible in "that" Livelink, because code you got working in another project may not work here: every Livelink install can define its business rules differently 🙂
These conditions can be tested in a matter of minutes using SoapUI. Once you have gotten in with an auth token and accessed the Enterprise workspace or personal workspaces, switch to a Java or .NET IDE. SoapUI can only take you so far; things get extremely complex when you encounter category/attribute data structures (funny how the push to get away from LLValue data structures in LAPI is still there, albeit using structures like AttributeGroups), so SoapUI will not cut it there.
Other Buzzwords you may hear in a Livelink-based project
REST API. OT has been forced by the programming world to make a REST API implementation available in kick-butt Livelink installs. Note that every call to the REST API is akin to accessing Livelink via a browser, so save yourself some headache by not designing bulk-loader things with it. You may want to show off your Livelink in a mobile application; BTW, I think AppWorks is the marketing hype name for that. A minimal first call is sketched below.
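A first conversation with the REST API might look like this; the base URL is a placeholder, and a real client should use a proper JSON parser instead of the regex shortcut:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/**
 * CS REST API sketch. Adjust BASE to however your front end
 * exposes cs.exe / llisapi.dll.
 */
public class CsRestSketch {
    static final String BASE = "https://ll.example.com/otcs/cs.exe/api/v1";

    public static void main(String[] args) throws Exception {
        HttpClient http = HttpClient.newHttpClient();

        // 1. authenticate: form-encoded credentials, response carries {"ticket":"..."}
        HttpRequest auth = HttpRequest.newBuilder()
                .uri(URI.create(BASE + "/auth"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("username=Admin&password=secret"))
                .build();
        String body = http.send(auth, HttpResponse.BodyHandlers.ofString()).body();
        String ticket = body.replaceAll("(?s).*\"ticket\"\\s*:\\s*\"([^\"]+)\".*", "$1");

        // 2. each call is one browser-like round trip: fine for a mobile UI,
        //    painful for a bulk loader. 2000 is often, but not always, the
        //    Enterprise workspace, so treat it as a placeholder too.
        HttpRequest node = HttpRequest.newBuilder()
                .uri(URI.create(BASE + "/nodes/2000"))
                .header("OTCSTicket", ticket)
                .GET()
                .build();
        System.out.println(http.send(node, HttpResponse.BodyHandlers.ofString()).body());
    }
}
```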
Search API. When you are in a Livelink session and type a search term, it is the Search API that is working for you. You can create cool search-based apps with it. For heaven's sake, resist the temptation to write LAPI or CWS code for Livelink search; it ain't very easy and it is convoluted.
ELS API. It is the wrapper connecting Livelink and Archive Server (the definition is not technically correct). If you wanted an SAP system to use Records Management or Business Workspaces, you would put in this middleware (old name RCS). This is a solution, which means ultimately you will have to pay people who know SAP Basis and Functional configuration, Livelink configuration, and Archive Server configuration. Recently a guy asked me about CMIS. I looked at it and told him to concentrate on those 3 aspects, and he figured it out. Note that CMIS is an industry protocol, and OT uses it in an ELS API layer for some objective I know not. In most solutions you will hear terms like ArchiveLink, RMLink, ECMLink.