Why Oscript solutions are not passé

Over the years in my work with OpenText Livelink I have been fortunate to associate with like minds who, like me, share their code, goodwill and advice selflessly. I cherish and hold their friendship above all professional accolades I have ever received. Recently I had lunch with Ossie Moore (Ossie’s LinkedIn); it was very short and I would have loved it to last longer, as we could talk shop endlessly. Most of what I write here has a bearing on that lunch, and he did steer this post. Ossie, for people who do not know him, is quite a character. Let’s say you had a requirement: most of us would probably settle for pretty decent code in either Oscript or LAPI, as those were the predominant development methods one had at the time. Not Ossie. He would probably write our kind of code, and before releasing it he would have at least two better implementations; he would have thought about the user coming through the web, the other interfaces, LL Explorer and so on. That is the difference.

Let’s look at ECM deployments in major organizations.

X Organization has identified a massive unstructured data problem. X has a new manager on the block who has had experience with a technology vendor; insert <vendorname> here. This is how most ECM deployments happen: a lot of people passionate about some technology who solutionize the deployment to suit the vendor. Most of them are assisted by Gartner, Forrester and those kinds of firms. I started working in OT technology before these rating companies came up with their barometer of wants, so I am going to continue with my cynicism.

So X Organization rolls out an OOB approach and people are brought in. Naturally, if the product is user friendly and easy, its use will grow. Simple things like user activity, data growth etc. are good ways to gauge its usefulness. Now most people will want to send a link to an object, hoping that it won’t break on re-organizing the structure or re-parenting, which is what most document work revolves around. So you may want to invest in something that people have put thought into, like a GUID-based system (Livelink, DCTM). Frankly I know only Livelink, and DCTM very little. I hate SharePoint (we call it $carepoint) with a passion because the smallest of things are so difficult to get done. Good UI, and all the world has gone for it, but anything complex you have to buy or build with an array of .NET developers, different frameworks, the works. It could also be that I have only been programming in and spending time with LL, so over the years the idiosyncrasies of Builder and CSIDE no longer bother me, because Oscript really has not changed very much. Oscript is a language of huge possibilities. One of my mentors, John Simon, very famous in OT circles, would joke like this: you have a problem you try solving in Oscript; you spend about 3 to 4 days on it only to find out that what you need is already available. No need to kludge, just poor documentation, that is all.

So if you are eager to use Livelink to its full extent, do these things:

  1. Challenge an OT person or an OT solution marketer on what they are offering you. Challenge them extremely hard if it is offered at $$$ a user or so. That is how they make their money on licenses.
  2. Challenge the pitch if it claims that Oscript is bad and that it spoils your chances of an upgrade. There is an element of truth in that: I have seen people hacking their way into core code, WebLingo and such no-nos, which shows inexperience and desperation. Oscript is not just understanding the creation of a request handler, a WebNodeAction or event scripting; it is an all-round knowledge of the product and a passion for the product. You can immediately tell whether the programmer knows anything, because if they start talking about new subtypes you may want to ask them what prompted them to create a subtype; many times those people have access to the Oscript tutorials, and the “Hello World” of Oscript introduces them to that. You really want an Oscripter who will follow what OT advises; in that case it is very rarely a bad fit.
  3. Challenge if the pitched solution ends up working only in the web GUI. What about EC? What about SOAP? What about REST? What about Search?

Oscript has years to go as long as the LL solution is still being used by customers, and it can be learned and made to work wonders for you. The other aspect, and full credit to Ossie here, is that LL is the most open product, much more open than open source: an Oscripter can see the entire internals of it, so people like me, John and Ossie have all found bugs, and we all engage the good OT developers on our finds. Most senior and passionate Oscripters continue to do so. There is so much openness and welcome in the OT programming community; people like David Templeton, Kyle Swidrowich and countless other good people have been striving to keep this open and to get communities to try it out for a better product. They have helped me and a lot of others, and you can also get in on the action.

You just need the drive and passion to master LL, that is all it takes. Well, that may be true for any technology 🙂

To Undelete or Not

Many times administrators and users do stupid things and want to cover their tracks quickly, without creating unwanted attention. Most people working with Livelink know that the application is responsible for data integrity and referential integrity, so to do any proper reversal you would need to script something back. But again, here you are at a disadvantage. If you resort to standard OT support, the most you are going to get is “you have an error that is not easily fixable and we recommend you fix it with our Professional Services team”. A small company or organization does not have that many resources or that much money, so the administrator or a power user turned developer tries to see if something can be done. Lo and behold, the KB is consulted, and most members who like to show off will post their comments. That goes for me also, because there isn’t a day that goes by when I cannot respond to something. However, I am very judicious when I tamper with the Livelink database. That is because I understand Oscript and I see the myriad transactions that happen when it is working, doing upgrades and such. But in many organizations people are well versed in LiveReports, and many now have the WebReports product, which is an excellent alternative to certain kinds of programming or utility programs. This is where the problems start. Say you accepted bad advice, like fixing a column or data point in a table that the “learned KB people” advised; you would probably not notice it until months or years later, as part of an upgrade, so what you did in earnest will land you in irrecoverable or costly difficulties. So RESIST the urge to change values in OT-given tables. If it is your table and you are responsible for it, do so by all means. There is no problem with playing in a VM and understanding it, but just don’t think that touching llattrblobdata or llattrdata is all it takes to do category updates 🙂

That doesn’t absolve OT as a company; it is expected to do its fair share. Some things it could do:

  1. Perhaps with paid support the customer gets 10 free incident fixes: OT absorbs the development cost and passes the solution to a would-be admin/dev in the organization.
  2. A list of compiled, easy-to-use reports and utilities: perhaps WebReports, perhaps a compiled Java app that uses REST or WSAPI, or even a utility Oscript module.
  3. Some reassurance from support to quell the panic, and perhaps advice on another course of action, for example telling the customer that a deleted user or users need not necessarily be considered a panic situation.

I had a short stint working with Documentum, and I do not know whether the utilities it delivered were as highly priced as OT’s, but it had a language called DQL (Documentum Query Language). So anyone knowledgeable about the schema could run reports and turn them into DQL commands, which would basically honor data integrity and referential integrity for them. This is in a way what WR does, but it takes enormous patience and complexity to work with it, which again raises the question of whether I should have developers supporting my application. I would advise the WR people in OT to provide clear, crisp examples, one-to-five-liners that work with any Livelink and any schema, a kind of pre-built library, more like the canned LiveReports.

I tested a user deletion today and tried to see which tables would be affected, so for KB users who want to see it, it is linked below. Hopefully I will find a cheap hosting provider to put my content on, rather than in the KB, as I have been burned in my earlier attempts.

“Why it may not be a good idea to update livelink tables.docx” can be accessed via the following link: https://knowledge.opentext.com/knowledge/cs.dll/Properties/61862916

Demise of communities.opentext.com

RIP communities.opentext.com

Years of developer activity retired…. Just like that

How would a developer who does not have access to KB find anything informative?

Is that intentional?

Why wouldn’t OT give content owners fair warning to migrate their content?

Much of the code I kept for public consumption was hosted there.

So if anybody is looking for any of it, write to me and I will try to find it from my local copies.

General Help Series 4- Workflow Maps/Instances

Livelink Workflows for the impatient

Livelink comes with an extremely powerful workflow engine. At a minimum, even if you have only installed Livelink OOB, you get decent workflow features. The idea is that a “business user” designs a logical map that may involve “players” or “roles”. The only criterion is that the process needs to be repeatable, or at least that it can be plotted in Visio or the like. Note that Livelink WF came before the workflow consortium came up with open standards, but it has almost everything one sees in the standard.

Some key terms:

  1. WF Map: this is a DTree object identified by a DataID.
  2. WF Manager: by default, the person who created the original map. Best practice dictates that you assign a proper Livelink group as the ‘Master Manager’; this is not to be confused with the Livelink team’s manager or an organizational manager. In theory this manager can re-assign steps and repaint the map. It is one of the few places where the sysadmin profile or the ‘admin’ user won’t cut it, so do your due diligence, keeping in mind that at some point a higher-up user may be asked to assist. Please do not run maps with <Initiator /> as the Master Manager; it is quite possible these users only know how to click, and when something breaks nobody can intervene. You can restrict them to “See Details”, which is more than enough in many cases.
  3. WF Instance: the process logic that is set in motion when a WF Map is initiated. It is easily identifiable by a clickable link in the WF manager’s assignments, and it will show the current step in GREEN along with useful info.
  4. Steps: the series of steps that can be assigned to processes and users. This is what the business user can apparently be trained to paint; there are a myriad of steps that OT puts on the palette.
  5. Role Based or Map Based: if you create a map OOB, Livelink gives you a map where the forward steps can be driven by business programming, for example “if this WF attribute is green then my user is this group”. If you change the map to “Role” based, then before initiation the initiator will have to fill in all the people in the workflow. People who use the Transmittals product of Livelink can see this in action very clearly.
  6. Attachments Volume: by default the attachments package is permissioned Full Control to “Public Access”, which is a very bad idea, as a casual user doing a search will see unwanted or secretive results. Always put a good permission bit there. If you do not give participants Add Items up to Delete in that volume, you will get crude messages from Livelink. Note that the truth-table implementation is observed by Livelink on any kind of object creation, so to get around problems in workflow OT assigns default Full Control to Public Access. Just observe what OT gives, change that to a good group, and add all the people in the workflow to that group.
  7. WF Attributes: helper attributes modelled after category metadata. They allow the business user to route the map.
  8. Forms: optional package, see the NEXT post.
  9. Loop Back: be extremely careful with this, as you can create an infinite WF instance.
  10. Item Reference: you can point to existing object subtypes in Livelink, like folders, documents etc., so in conjunction with Item Handlers you can perform “auto magic”.

It is quite possible that a good business user or a Livelink user can be trained in a day to understand the process flow/swim lanes. I usually design my maps on paper. I maintain a cheat sheet of sorts, and many of the things in this article are based on it. Always check “Verify Map Definition” to understand any problems the map may have.

The Item Handler is a predictive step modelled by some placeholder logic (Design Book 3). Although it appears useful, it can be frustrating, as very little process automation can be done with it alone; the workaround for “auto magic” is usually a human at a prior step. If you use it with the XML WF Extensions, a lot of “auto magic” can be done with it.

The “XML WF Extensions” is an optional module that uses the capability of Livelink objects to have an XML representation (everybody has probably heard of XML Export/XML Import).

The XML WF Extensions use XML representations of Livelink objects and manipulate those objects with a SAX (or DOM, I am not sure; I know LL has both) parser. Usually people get bitten by load balancers and file system permissions.

Likewise, XML Workflow Interchange can send to and call web services of other systems.

ESign: a specialized workflow that can do electronic signatures, prevalent in 21 CFR Part 11 operations.

These optional modules are perfectly suited for Livelink organizations that will not invest in Oscript coding, or that have been scared by OT sales/marketing out of writing Oscript, without understanding this remarkable product. They were very expensive when I started Livelink programming, so there is a personal bias here as well, since I like to look at the map and many times I can get a 1-to-1 representation of the process.

The OT Workflow Design 1 and Design 2 guides are a must-read if you are doing anything with workflows; they also cover decent interface methods for web services as well.

Common Debug Protocol

*************************** START UPDATE 10/3/2016 ***************************

LAPI and its client installer have become very hard to find. Moreover, clients written in LAPI, say in Java or .NET, will only work if your Livelink a.k.a Content Server is a version less than CS16. Readers who are new to LL programming are encouraged to read this for the approach, not for the exact lines of code. What I mean is that when you used to program in LAPI, you were basically passing parameters to discrete calls, modelling them on the web GUI of Livelink. The SOAP-based web services, called CWS, are the same: if you do not try to do the task in the web GUI first and understand the business rules, you will have almost no success in CWS either. OT is notorious for not putting out fully functioning use cases and walkthroughs, so whenever possible I write code assuming the reader has not worked in Livelink for X number of years, and try to educate you all. Livelink, Content Server, Enterprise Server: all of these have been Livelink’s marketing brand name changes over the years. CS is used in many of the integrations like AGA, xECM and RMLink, and you know you are programming against Livelink if you see a link that looks like this: http(s)://somefriendlyURL/livelink.exe|cs.exe|llisapi.dll|cs.dll|livelink. In many places SAP/SP/Exchange will be configured to talk to Archive Server, and Livelink will then be used to read from Archive Server and turn that content into LL objects for better presentment, RM and other aspects. The AGA product is moving away from LAPI (not sure if totally or not) to the REST API in LL.

*************************** END UPDATE ***************************

In most cases I have noticed that programmers brought into Livelink a.k.a Content Server web services programming lack a general understanding of Livelink and its business rules. The vendor-maintained documentation is written almost for a starting novice, but people will still run into problems. To mitigate this, one has to have a simple protocol:

  1. Ask the Livelink team for a representative container (folder, project, compound document, document workspace: all of them are shells or containers that hold additional objects) that the CWS programmer can access using the web GUI.
  2. Ask how authentication is enforced. In almost 90% of places it will be SSO (NTLM, IWA, other LDAP methods).
  3. In most cases the CWS application you source the WSDLs from will be “anonymous”, because that is the level of documentation OT gives admins, which means that the CWS user has to be a manual user in Livelink, also known as a user with a password that the Livelink admin team gives you.
  4. I have published several successful posts here that allow me to do SSO (IWA/NTLM); that is because I know how to set it up correctly. Most of it came from looking at IWA web services samples (see the sketch after this list).
  5. I have installed and configured OTDS as well; it is a redirection-based (Kerberos) implementation. Simply put, when you access a link such as http://localhost/livelink/llisapi.dll and OTDS authentication is involved, your call is redirected to the OTDS server (Tomcat in the version I tried). The Kerberos token from your domain login is used to establish who you are against a configured LDAP appliance like AD, and you are returned to the Livelink server. Anybody who knows enough about Livelink URLs knows the web-escaped URLs; you should see the redirection when using Wireshark or Fiddler as well. These ideas are my interpretation and not OT’s, so take this with a pinch of salt.
  6. The above OTDS authentication is overkill if Livelink was deployed as a DMS or if people in your org already know that Livelink is there. It makes sense if you need to tunnel xECM SAP users, piggybacking on MYSAPSSO2, mostly for integrations from other systems that want a seamless experience. Altogether it is very easy to set up and implement, given that creating SSO against AD is basically a one-click operation.
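Since IWA/NTLM questions come up so often (points 2 to 4 above), here is a minimal sketch of how a .NET CWS client could be pointed at an IWA-protected Livelink instead of an anonymous one. Treat it as an illustration only: the proxy class name Authentication.AuthenticationClient and the endpoint URL are my assumptions from a typical generated service reference, not an official OT sample.

using System.Net;
using System.ServiceModel;

class SsoBindingSketch
{
    // Minimal sketch: adjust the proxy class name and URL to whatever your own
    // service reference generated against the Authentication service.
    static Authentication.AuthenticationClient CreateIwaAuthClient()
    {
        var binding = new BasicHttpBinding();
        // Plain HTTP, but the transport negotiates the Windows (NTLM/IWA) credential
        binding.Security.Mode = BasicHttpSecurityMode.TransportCredentialOnly;
        binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Ntlm;

        // Hypothetical endpoint address; use your own CWS Authentication URL here
        var endpoint = new EndpointAddress("http://yourserver/cws/Authentication.svc");

        var client = new Authentication.AuthenticationClient(binding, endpoint);
        // Send the logged-in Windows identity instead of a manual Livelink user/password
        client.ClientCredentials.Windows.ClientCredential = CredentialCache.DefaultNetworkCredentials;
        return client;
    }
}

The same binding tweak can be applied to the other service clients; in my experience the rest of the flow (a token stamped on every call, as shown below) stays the same, but your mileage may vary depending on how the web server fronts CWS.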

Alright, so what is this post about?

If you are a .NET CWS/EWS programmer, you will basically be sending this token with each service client call, because Livelink is HTTP-based and effectively stateless, and you never know whether the request is being handled by the same server or the same thread.

You would see .NET code like this

DocumentManagement.OTAuthentication dmOTAuth = new DocumentManagement.OTAuthentication();
ContentService.OTAuthentication     csOTAuth = new ContentService.OTAuthentication();
SearchService.OTAuthentication      ssOTAuth = new SearchService.OTAuthentication();

and things like

string token = authclient.AuthenticateUser(username, password);
if (token != null)
{
    ssOTAuth.AuthenticationToken = token;
    dmOTAuth.AuthenticationToken = token;
    csOTAuth.AuthenticationToken = token;
}

contextID = dm.GetVersionContentsContext(ref dmOTAuth, dataID, 0);

The above is akin to a logged-in user trying to download; the ref keyword comes from the generated proxies and assumes a reasonably modern .NET CLR.
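To round that out, here is how the fragments above might hang together end to end. This is only a sketch under assumptions: the proxy class names (AuthenticationClient, DocumentManagementClient, ContentServiceClient), the string context ID and a ContentService.DownloadContent call returning a Stream are what a typical generated CWS service reference tends to give you, so verify against your own proxies before copying.

using System.IO;

class DownloadSketch
{
    // Hypothetical end-to-end flow: authenticate, fetch a download context for the
    // latest version of dataID, then stream the bytes to disk.
    static void DownloadLatestVersion(long dataID, string targetPath, string username, string password)
    {
        var auth = new Authentication.AuthenticationClient();
        var dm = new DocumentManagement.DocumentManagementClient();
        var cs = new ContentService.ContentServiceClient();

        // One token for the session, stamped on the service's own OTAuthentication object
        string token = auth.AuthenticateUser(username, password);
        var dmOTAuth = new DocumentManagement.OTAuthentication { AuthenticationToken = token };

        // 0 asks for the latest version, same call as in the snippet above
        string contextID = dm.GetVersionContentsContext(ref dmOTAuth, dataID, 0);

        // Assumed signature: ContentService streams the content back for the given context
        using (Stream source = cs.DownloadContent(contextID))
        using (FileStream target = File.Create(targetPath))
        {
            source.CopyTo(target);
        }
    }
}

If your proxy exposes DownloadContent differently (some generate it as returning a byte array or an attachment wrapper), the last block changes shape but the idea is the same.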

If you are a Java CWS/EWS programmer

In Java, however, you don’t need to do this; instead you set the authentication information on the object representing the service. In your code you do this:

setSoapHeader((WSBindingProvider) search, otAuth);

There are many more subtle differences between Java and .NET, but for the most part it should work cleanly.

If one were interested in debugging at the Livelink server:

  • Method 1: Log in to the Livelink URL, which usually ends in /llisapi.dll, livelink.exe or livelink. Do you see an authentication screen that looks like an HTML page, and not one that looks like a Windows auth prompt? If you were taken seamlessly into the application, then this “instance” of the Livelink server has some authentication mechanism set; assuming the simplest mechanism, it could be IWA, also interchangeably known as NTLM, SSO etc.
  • Method 2: Install Fiddler and learn how to use it when you hit a web server. Then you can actually see your WSAPI client code making requests in action. Many times the requests are going to indicate something.
  • Method 3: There is a Livelink sysadmin request handler that goes like <LIVELINKURL>?func=admin.testargs. Do you see REMOTE_USER filled in and an auth mechanism resembling AUTH_TYPE=Negotiate, REMOTE_USER=DOMAIN\USERID or userid@domain, REQUEST_METHOD=GET? Then chances are this Livelink is protected by an auth method other than “anonymous”. Most web services samples written by OT pertain to simple, anonymously authenticated Livelink servers. Are there SSO samples lying around on the web or here? Sure, search for Guy Pomerleaux or me for a few of the people who have ventured into it in the forums. Just by trial and error with the Web.Config and App.Config I have succeeded. Why wouldn’t OT put out an official sample? My guesses: different web servers, different ways of deploying Livelink, too much work to officially support it, although the OTDS mechanism is kind of a middleware meant to address that 🙂
  • Method 4: Search debugging. When people or programs search, Livelink uses the Livelink Search API to make the call. While it looks integrated, believe it or not the search engine is a separate piece of software, and it can be used for any general-purpose searching; read OT history, they were one of the first search companies in the world (the early Yahoo! search engine :). Sometimes it is difficult to construct queries properly, so I do it like this: isolate a named server (some address that won’t be a LB URL), find that server’s opentext.ini, and in the [options] section add wantSearchLogs=true (see the snippet after this list). Then run a search from the GUI. In all likelihood, for each search you will find a new txt file that contains the query; try to understand it and repurpose it. Similar files will exist on the Admin server for Prospector queries and Intelligent Classification queries.
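For reference, the opentext.ini change mentioned in the search debugging method above looks like the lines below. Leave whatever is already under [options] in place; as far as I know you need to restart the Livelink service for ini edits to take effect.

[options]
wantSearchLogs=true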

Now I have given you many pointers that should lead to successful coding. However, let it be known that Livelink can be configured in very many different ways: an admin can set the Livelink cookie expiration to a very low value, you could be bouncing between Livelink servers, or your code may come from a redirected system such as SAP or SP. Lots of problems can occur if the architecture is not well understood by the people replying, like me, or even by you. I am not an OT employee, but I have worked enough to know certain things that can occur. So my simple request to you would be to educate yourself and the forum as much as possible about the same. Also, try the simplest mechanisms of OT code first to iron out the difficulties. It should not be too hard.
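On that note of trying the simplest mechanisms first, a smoke test I like is to authenticate and fetch one known node before attempting anything clever. A sketch is below; the GetNode call, the Node type and its Name property are my assumptions about the generated DocumentManagement proxy (AuthenticateUser and the token pattern are the same as above), so treat it as an illustration rather than gospel.

class SmokeTest
{
    // Hypothetical sanity check: can we log in and read a node we can already see in the web GUI?
    static void CheckAccess(long dataID, string username, string password)
    {
        var auth = new Authentication.AuthenticationClient();
        var dm = new DocumentManagement.DocumentManagementClient();

        string token = auth.AuthenticateUser(username, password);
        var dmOTAuth = new DocumentManagement.OTAuthentication { AuthenticationToken = token };

        // GetNode is assumed here; any cheap read-only call your proxy exposes will do
        var node = dm.GetNode(ref dmOTAuth, dataID);
        System.Console.WriteLine(node == null ? "No access or no such node" : "OK: " + node.Name);
    }
}

If this works against the representative container from point 1 of the protocol above, you have eliminated authentication and permissions as variables before you start debugging your real code.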