3 Quick Ways that TIC LogWatch Can Enhance Your NonStop Application Logs


Are you looking for ways to make better use of your NonStop Application logs?

TIC LogWatch is a Guardian program that watches different log files, looks for error patterns, and generates alerts when an anomaly is detected.

Here are some quick out-of-the-box ways that TIC LogWatch can be used to enhance your NonStop logs.

A Proactive Measure to Alert on Application Issues


Drivers rely on their dashboard gauges, warning lights and alarms to keep them apprised of potential issues – so shouldn't the same principle apply to monitoring your important business applications? Being forewarned at the first sign of trouble puts you in the best position to address a problem before further issues arise.
Like this guy who neglected to monitor his dashboard:

[Image: a broken-down car]

It's important not only to be warned of an application error, but to be informed as soon as possible so corrective action can be taken promptly. Paying attention to the important information that applications write to their logs is critical.

“Help! My EMS is overloaded!”

Does this sound familiar? When your NonStop gets very busy, your EMS also gets very busy. In fact, you may find that EMS consumes a lot of your CPU resources just processing the flood of error messages. Why are there so many messages in EMS?

Does this remind you of your EMS?

[Image: a messy, overloaded EMS log]
Many users dump EVERYTHING into EMS. The original intention of EMS is to allow
all the different errors to be analyzed and filtered in one place. But when everything
goes into this one pipe, the result is an overloaded, clogged pipe. When you dump
too much stuff into EMS:

  • It becomes difficult to find the error messages
  • EMS consumes a lot of CPU resources filing the messages
  • Operations tends to start ignoring messages on the EMS console because they are too overwhelming

There is a better way – LogWatch

[Diagram: LogWatch working in conjunction with EMS]

Instead of clogging up EMS, use LogWatch to monitor the different log files and work in conjunction with EMS.

LogWatch can monitor different files including:

  • Guardian files
  • OSS logs
  • VHS logs
  • Pathway logs
  • Third party logs, etc.

Lighten up the EMS load

Here is a quick way to reduce the EMS load: instead of routing your application errors to EMS, write them to disk logs.

  • Use LogWatch to monitor these application log files for errors.
  • LogWatch is scalable – you can have different instances of LogWatch monitoring different things.
  • LogWatch is easy to set up – you can set one up in minutes, and it won’t interfere with other instances.
  • Have LogWatch route only the errors to EMS.

Perfect companion to Prognosis or MOMI
If you are using a performance monitoring tool like Prognosis or MOMI, you will find that LogWatch works with it very effectively.

  • Use LogWatch to monitor disk log files for errors.
  • Configure LogWatch to route a message to EMS with a specific Message ID or text pattern.
  • Enable Prognosis or MOMI to pick up these specific messages from EMS to take corrective actions.

Take Away – “Prevention is better than cure”
More than many other IT folks, NonStop users understand and appreciate the importance of availability, the cornerstone of the platform. But applications do encounter errors, and errors can lead to a stoppage. When that happens, it is important to recover from the failure as quickly as possible. Any extended downtime due to an unavailable application translates to lost revenue and lost user confidence. With some advance planning and a good log monitoring implementation, problems can be detected early and remedied promptly.

  • Analyze your logs – Where are the logs? What is written to the application logs? Take a look at some of the old logs and see what is going on in the environment (a quick way to start is sketched after this list).
  • Plan ahead – What are some of the log messages that require specific actions? What actions? Who should be responsible for actions?
  • Execute the plan – Start implementing a plan to monitor the key log files, and automate the log monitoring process with a tool like LogWatch.
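For the "analyze your logs" step, a few OSS shell one-liners can give you a quick feel for what an application log contains. This is only a rough sketch, assuming the log is an OSS text file at a hypothetical path and that standard OSS utilities (grep, awk, sort, uniq) are available:

  # Hypothetical application log path
  LOG=/home/myapp/logs/app.log

  # How many entries of each severity does the log contain?
  grep -c "ERROR"   $LOG
  grep -c "WARNING" $LOG
  grep -c "INFO"    $LOG

  # Which messages occur most often? Drop the first two fields (assumed to be
  # date and time) and tally the remaining message text.
  awk '{ $1=""; $2=""; print }' $LOG | sort | uniq -c | sort -rn | head -n 20

Even a rough tally like this tells you which messages dominate the log and which ones deserve a planned response.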
LogWatch FAQ

Feedback please

Do you find this tutorial blog helpful? Let us know what you think, and how we can make it even better. Don’t forget, you can subscribe to our blogs (top right-hand corner of the home page) to get automatic email notification when a new blog is available.

Phil Ly is the president and founder of TIC Software, a New York-based company specializing in software and services that integrate NonStop with the latest technologies, including Web Services, .NET and Java. Prior to founding TIC in 1983, Phil worked for Tandem Computer in technical support and software development.

Know Your OSS Logs Part 2 – Java Servlet and NS/JSP Logs


In my previous article (Know Your OSS Logs Part 1), I discussed the importance of monitoring iTP Web Server logs. If you are running Java servlet or JSP applications with iTP Web Server on NonStop, it becomes even more imperative that you monitor their logs. Why? Because they are your only conduit into what's happening in the execution environment. Is the application running correctly? Was there an environment issue? Did the application abend? All this and other useful information is kept in the logs.

In this article, I want to share with you some basic information on Java, servlet and JSP logs. Again, all of this information is already available in the HP documentation, so I am going to give you the "Cliff Notes" version here:

What to monitor?

servlet.out (stdout). This is the default location for the servlet to write APPLICATION messages. So if the application encounters any issue, e.g. Pathsend failures or data file security errors, the messages are usually reported in this file.
servlet.err (stderr). This is where the servlet reports errors it encounters.
JSP rollover logs
These logs relate to servlet/JSP processing and may contain many, many entries, including the servlet container's activities and status. This is a particularly difficult log to sift through, as there can be so much information in it. These logs are configured to "roll over" (create a new file) based on certain criteria, such as date or size.

servlet.out sample entry

$PM:inquiry-class failed with a server exception.
An error has occurred with the link to the server.; TS/MP error 904; File system error 201; serverclass name: $PM.inquiry-class

Note: The above entry shows that the application has failed on a Pathsend and logged this message

JSP rollover log sample entry

An error occurred at line: 36 in the jsp file: /aceviI.jsp
DataConversion cannot be resolved
33: // prepare the byte arrays
34: if (messageIn == null || messageIn.length() == 0) messageIn = " ";
35: byte[] messageInBytes = new byte[messageIn.length()];
36: DataConversion.JavaStrToCobolStr(messageInBytes, messageIn, 0, messageIn.length(), "UTF8");
37: byte[] messageOutBytes = new byte[maxReplyLength];

Note: The above entry shows that the application has encountered a runtime environment error because a required class (DataConversion) cannot be resolved.

Why monitor?

Monitoring these logs allows you to check the health of the Java Servlet or NS/JSP applications and to detect errors as soon as they occur.

  • Did your Servlet just abort?
  • Did your NS/JSP just encounter a Pathsend error?
  • What NS/JSP pages are being accessed?
  • How can you quickly find the ERROR entries in your logs, among all those INFO and WARNING entries?
  • Which line of code in your Java Servlet has a problem?

Where are the logs?

The locations of these log files are specified in the server configuration file, and they usually reside in <NSJSP_HOME>/logs, where <NSJSP_HOME> is typically /usr/tandem/webserver/servlet_jsp/ or /usr/tandem/webserver/servlets/.

servlet.out /usr/tandem/webserver/servlets/logs/servlet.out
servlet.err /usr/tandem/webserver/servlets/logs/servlet.err
JSP logs /usr/tandem/webserver/servlets/logs/servlets.2012-08-02.log (rollover by date)
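Because the JSP logs roll over, you may first need to find the newest one. Here is a small sketch, assuming the default directory above and the servlets.YYYY-MM-DD.log naming shown, with standard OSS utilities:

  cd /usr/tandem/webserver/servlets/logs

  # List the rollover logs, newest first, and pick the most recent one
  ls -t servlets.*.log | head -n 1

  # Follow the newest rollover log as new entries are written
  tail -f "$(ls -t servlets.*.log | head -n 1)"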

Who should look at them?

The Operations team members are usually the ones monitoring these log files, as doing so allows them to check the health of the system. However, the log entries are also very important to Developers during the development and QA phases, as the logs help them quickly pinpoint the location of code issues.

If there are errors, then Development may be contacted to look at them.

servlet.out Operation, Development
servlet.err Development
JSP logs Administrator, Development

How to review the logs?

If you do it manually, it can be quite a daunting task to look through all the entries to find what you are looking for. You could use cat, tail and vi to review the logs (a few shell one-liners, shown below, can help narrow the search); realistically, you might be better off downloading the files to your desktop computer and sifting through them with a desktop tool. But the best way is to automate with LogWatch.
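If you do want to take a quick manual look before automating, here is a minimal OSS shell sketch, assuming the default log locations listed above and standard OSS utilities:

  cd /usr/tandem/webserver/servlets/logs

  # Show only the ERROR entries, skipping the INFO and WARNING noise
  grep "ERROR" servlet.out

  # Look for thrown exceptions reported by the servlet
  grep -i "exception" servlet.err

  # Watch servlet.out live, showing only new ERROR entries
  tail -f servlet.out | grep "ERROR"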


Automate with LogWatch!

Instead of having to manually view these files in OSS directly, you can automate by using TIC LogWatch to:

      • Look for ERROR entries in the servlet logs
      • Look for any "thrown exception" messages
      • Extract the key information from the error messages and raise an alert email or EMS message to notify Operations or Development
      • Clone a copy of the entries to a Guardian file

All these and more can be done automatically with LogWatch.


Feedback please

Do you find this tutorial blog helpful? Let us know what you think, and how we can make it even better. Don’t forget, you can subscribe to our blogs (top right-hand corner of this page) to get automatic email notification when a new blog is available.

Phil Ly is the president and founder of TIC Software, a New York-based company specializing in software and services that integrate NonStop with the latest technologies, including Web Services, .NET and Java. Prior to founding TIC in 1983, Phil worked for Tandem Computer in technical support and software development.

Know Your OSS Logs Part 1 – iTP Web Server Logs


In this article, I want to share with you some basic information on a few important iTP Web Server logs. Since a lot of this information is already available in the HP documentation, I thought I would give you a "Cliff Notes" tour using the "5 W's + 1 H" approach:

What?

access.log The access log file records the request history of a server. The information in this file is structured in the common log format (CLF).
httpd.log The extended log file combines the functions of the access log and the error log files, recording information concerning requests and errors. This format places errors in context with the relevant request.
error.log The error log file records all request and server errors. The information in this file is structured in the common log format (CLF).

Why?

Monitoring these logs allows you to gauge the health of the web server and to detect errors as soon as they occur.

  • Are Web requests coming in?
  • Are they completing with code 200 (normal) or errors like 404 or 500?
  • What pages are being accessed?
  • Where are the requests coming in from?
  • Are there any errors?

Where?

The locations of these log files are specified by directives (such as ErrorLog) in the server configuration file. These are their common locations:

access.log /usr/tandem/webserver/logs/access.log
httpd.log /usr/tandem/webserver/logs/httpd.log
error.log /usr/tandem/webserver/logs/error.log

 Who?

The Operations team are usually the people monitoring these log files, as doing so allows them to gauge the health of the system. If there are errors, then Development may be contacted to look at them.

access.log Operation, Administrator
httpd.log Administrator, Development
error.log Administrator, Development

How?

If you do it manually, then you should remember cat, tail and vi (a slightly more targeted filtering example follows this list). For example:

  • To view the whole log file (equivalent to FUP COPY)
    cat access.log 
  • To view the last entries:
    tail access.log
  • You can also view them using the vi editor
    vi access.log
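To go one step beyond viewing whole files, you can filter them. The sketch below assumes access.log is in the common log format (CLF), where the HTTP status code is the ninth whitespace-separated field; adjust the field number if your format differs:

  # Show only the requests that did NOT complete with code 200
  awk '$9 != 200' /usr/tandem/webserver/logs/access.log

  # Count how many times each completion code occurred
  awk '{ print $9 }' /usr/tandem/webserver/logs/access.log | sort | uniq -c | sort -rn

  # Show the most recent entries in the error log
  tail -n 20 /usr/tandem/webserver/logs/error.log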

The Take-Away

iTP Web Server records its web activities in log files. By monitoring these log files, you can confirm that transactions are flowing normally, or spot an error condition that needs to be addressed.

Automate!

Instead of having to manually view these files in OSS directly, you can automate by using TIC LogWatch to:

  • Monitor the files for new entries
  • Clone a copy of the entries to a Guardian file
  • Scan the entry content to look for specific information, e.g. any HTTP completion code that is other than “200”
  • Raise an alert by sending an email to an Administrator or Developer based on the detected condition

Click here to learn more about how LogWatch can help you monitor iTP Web Server logs.

Next Topic: Know your OSS logs – Java Servlet and NS/JSP

Many iTP Web Server applications are actually built on Java servlets and Java Server Pages. This environment involves additional log files beyond the ones discussed here. I will discuss the usage of these log files in my next blog.

Feedback please

Do you find this tutorial blog helpful?  Let us know what you think, and how we can make it even better. Don’t forget, you can subscribe to our blogs (top right-hand corner of this page) to get automatic email notification when a new blog is available.

Phil Ly is the president and founder of TIC Software, a New York-based company specializing in software and services that integrate NonStop with the latest technologies, including Web Services, .NET and Java. Prior to founding TIC in 1983, Phil worked for Tandem Computer in technical support and software development.

Make Your Logs More Useful: Use LogWatch!


The Challenge: Log information overload

As any Operations Support person can tell you, one of the challenges of the job is making use of log files to keep abreast of potential problems.

However, if you have ever sat in front of a NonStop Viewpoint or EMS console display, you know that the number of messages scrolling by is impossible for anyone to read, let alone analyze.

Or open up any application disk log file, and I can bet you that there are tons of messages in there whose meaning no one knows – except maybe the person who wrote the program. See if this scenario sounds familiar:

Operation: "I just saw this message in the log. What should I do with it?"

Developer: “Oh, don’t worry about it. It’s just a debugging statement.”

Operation: “What about that one?”

Developer: “You can ignore that one too. It’s just an informational statement.”


(Wrong) Conclusion
Log messages are not important!

Making Log Messages Meaningful

The major difficulty is that log files serve purposes with two somewhat conflicting sets of requirements:

  1. Logs must be as detailed as possible, to help Development find very specific information about why and how something has happened.
  2. Logs must be understandable and simple enough that Operations and Technical Support can actually make sense of them and respond in a timely manner.

There is a better way – LogWatch

TIC's LogWatch utility is designed to make a Support person's job easier by doing some basic analysis and display formatting for a wide range of log file types. LogWatch can be easily configured to parse through your system's OSS and Guardian logs to detect errors and raise alerts. LogWatch is easy to use and works right out of the package to help you monitor OSS log files such as iTP Web Server logs and Java logs, as well as Guardian application logs, EMS and VHS.

LogWatch allows you to define which messages you are interested in, and filters out the rest for you.


Below is an example of how LogWatch can look for specific entries, e.g. [ERROR] in a very busy log file.

[Screenshot: LogWatch finding an [ERROR] entry in a busy log file]

LogWatch can further extract relevant parts from the message, and add text to make it more meaningful.

2012-07-03 07:03:24 JAVA PROGRAM ERROR: getParamValue – SVBH-DCI-HEADER Contact Development

The above line consists of text EXTRACTED from the log message, plus ADDITIONAL TEXT added by LogWatch to make the log message less cluttered and more meaningful.
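To make the idea concrete, here is a rough OSS shell approximation of that extract-and-annotate step. LogWatch itself is configured through its own rules rather than shell commands, and the log file name and layout below (java.log, with a date, a time and an "ERROR:" marker) are hypothetical:

  # Keep only ERROR entries, extract the date, time and message text,
  # and append a reminder of who to contact.
  grep "ERROR:" java.log |
    awk '{
          msg = $0
          sub(/.*ERROR:/, "ERROR:", msg)   # keep everything from "ERROR:" onward
          print $1, $2, "JAVA PROGRAM", msg, "- Contact Development"
        }'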

The Take-Away

  • Keep complete and detailed logs.
  • Plan ahead, and use filtering to minimize clutter when searching through logs for relevant information.
  • Let LogWatch automate your NonStop log monitoring without overburdening EMS.


Feedback please

Do you find this tutorial blog helpful? Let us know what you think, and how we can make it even better. Don’t forget, you can subscribe to our blogs (top right-hand corner of this page) to get automatic email notification when a new blog is available.


Phil Ly is the president and founder of TIC Software, a New York-based company specializing in software and services that integrate NonStop with the latest technologies, including Web Services, .NET and Java. Prior to founding TIC in 1983, Phil worked for Tandem Computer in technical support and software development.


Who Is Minding Your Logs?


I have always said that you can tell who the NonStop users are in an audience by the questions they ask, such as:

  • Is this scalable?
  • What is the performance?
  • How do I monitor it?

The last question highlights the importance that NonStop system managers place on providing responsive support for their end users. That's why system performance monitoring tools like Prognosis, MOMI and others are very popular at NonStop installations. On the other hand, monitoring logs on the NonStop doesn't get a lot of attention beyond the use of EMS or VHS. Yet it is an important topic that is worth a closer look. How would you answer the questions below?

When do you look at your logs?

"When there is a problem" seems to be a common response. Surprised? Not really. Given how overloaded everyone is, there is very little time to monitor logs. Unfortunately, "When there is a problem" usually translates to "User calls up and reports a problem." Through routine monitoring of logs, problems can be detected and fixed before they affect the users.

Do you know where your logs are?

“Well… not all of them.”

When I speak of logs, most users think of system logs like VHS and EMS logs. Think again. Realistically, on a typical system, there are many other logs,
such as:

  • Application logs – entries written out by COBOL, TAL, C, C++ and other applications.
  • OSS logs – Are you running iTP Web Server, Java, NonStop SOAP, etc.? There is a lot of good information in the OSS logs.
  • Subsystem logs – Some subsystems have their own logs.
  • 3rd-party application logs – If you are running a 3rd-party application, e.g. WebSphere MQ, chances are it has its own logs.

So if you don’t know where your logs are, trying to hunt them down to resolve a problem can be time-consuming and detrimental to your service level agreement (SLA) with your end-users.

Are you overloading your logs?

Some users try to alleviate this problem by routing EVERYTHING to EMS or VHS. Unfortunately, that creates another problem: log overload.
When that happens, Operations staff basically stop looking at the hundreds of log messages that fly off the screen. Yes, EMS has a very nice filtering capability that can be configured to filter out different types of messages. But the reality is that not everyone is up to the task of configuring filters, and certainly not for the tens if not hundreds of different message types that flood the EMS log.

“What’s in your wallet?”

I meant… logs. If my guess is correct, chances are your logs contain more than just error messages. They probably have warnings, statistics and "stuff programmers write to the log for their own use." That's another key reason a lot of people do not bother looking at logs: the content overwhelms them. See if this conversation sounds familiar:

Operator: “I just saw this message in the log. What should I do with it?”
Answer: “Oh, don’t worry about it. It’s a debugging statement.”
Operator: “What about that one?”
Answer: “You can ignore that one, too.”

The result is a misconception: there is no need to look at log messages unless someone reports a problem.

The Take-Away – Monitor your logs

  • It can help you improve the quality of your end-user service.
  • Know your logs – Know what is in them and where they are. There's usually a lot of useful information in those logs besides error messages.
  • Have a log monitoring strategy – Take control and plan how you want to use the log information.
  • Automate as much as possible – Have a procedure in place and automate it whenever possible.

Next Topic: Know your OSS logs

As more users are starting to use OSS in one form or another, such as iTP Web Server, Java or SQL/MX, it becomes more important to pay attention to some of the logs that reside in the OSS space. In my next article, I will focus on these OSS logs.

Feedback please

Do you find this tutorial blog helpful? Let us know what you think, and how we can make it even better. Don’t forget, you can subscribe to our blogs (top right-hand corner of this page) to get automatic email notification when a new blog is available.

Phil Ly is the president and founder of TIC Software, a New York-based company specializing in software and services that integrate NonStop with the latest technologies, including Web Services, .NET and Java. Prior to founding TIC in 1983, Phil worked for Tandem Computer in technical support and software development.

 

Desktop Development for NonStop Introduction

Many years ago I switched from editing and compiling programs on a NonStop server to using Windows™ desktop tools. Once those tools were available, I found development both faster and much easier. Using EDIT, TEDIT or VI when I was used to editing on my PC was such a pain. At the time I started using the desktop cross-compilers, compiling on a NonStop was so much slower (Cyclone, EXP and K systems back then). And although the newer S-series and now Itanium systems are so much faster, cross-compiling still makes a lot of sense.


Some of the benefits I’ve found of using the desktop cross-compilers:

  • Compilation is much faster, and you can jump to an error's location in the source file with just a click of the mouse. Compiles of very large source files complete in seconds.
  • Color-coding of reserved words, auto-tabbing and other "intelligent" editor features, like IntelliSense (the editor anticipating what you are typing), reduce errors and speed development.
  • Compiler and linker settings are made through self-explanatory dialog boxes.
  • Shared directories of libraries and code including source control are readily available.
  • Debugging using a GUI tool (Visual Inspect) is a huge benefit, especially now that eInspect is so different from inspect.  Visual Inspect works with K, S and Itanium systems and looks just the same for all of them.
  • Moving files back and forth to the NonStop using a GUI ftp client is very easy and fast.  (I’ve been using WS FTP Pro for years.)
  • Programs can be developed for Guardian or OSS, and libraries and DLLs can also be created.
  • NMCobol, pTal and C/C++ languages can be used.

My first experience with desktop cross-compiling was using TDS (Tandem Development Suite). (Integrating the cross-compilers with Borland C++ was one of the projects I had worked on at Tandem back then.) I was developing Server Object Gateway (SOG) at the time. Using TDS on my desktop, I was able to develop my portions of SOG at a pace orders of magnitude faster than doing it just on the NonStop server.

TDS was built on Borland's C++ development suite. Tandem had talked to Microsoft about integrating with Visual C++, version 5 at the time, but they weren't interested. Several years later, when Visual Studio .NET came out, the cross-compilers were integrated with VS.NET and ETK was born. I use both ETK and TDS, since we have to build some of our applications to work on K systems and ETK doesn't support D48.

HP has decided to migrate from VS.NET to Eclipse.  I’ve done a bit of testing but still use ETK and TDS.  Perhaps I’ll give some comparisons in a future blog.

In my next blog on Desktop Cross-Compiling, I’ll show some examples and share some tips.

Feedback please

Do you find this tutorial blog helpful? Let us know what you think, and how we can make it even better. Don’t forget, you can subscribe to our blogs (top right-hand corner of this page) to get automatic email notification when a new blog is available.

Donald Wickham has 31 years of experience with NonStop, including 20 years with Tandem, Compaq and HP. He has been with TIC Software for 11 years in the role of Chief Architect.