A Primer on reading filter and workflow logs
Diagnosing many different kinds of application problems, questions, and concerns begins with recreating the
behavior with the appropriate logs enabled and sending them to technical support. The reason this is such a
common step in the troubleshooting or investigative process is that it is deterministic - it tells you what
happened, when, and with what data. Even more importantly, this diagnostic approach can localize a problem
caused by customization, providing insight into functionality which was implemented by a third party, a consultant, or a
departed administrator who left no documentation behind.
But creating logs is not a panacea. There are definite situations when it is more appropriate, things you can do
to make the logs easier to decipher, and tricks for finding what you need a bit quicker. And in the case of
customization, logging may tell you what happened without revealing how the intended functionality is meant to
work, or why it differs in some situations. Nor does taking logs definitively correct the problem in a single step
- it may localize a particular problem to a few workflow items you must look at in Remedy Administrator, or it
may just suggest the next step in troubleshooting by localizing or ruling out potential causes of the behavior.
The paragraphs below are meant to share some of the history, science, and art of reading filter and workflow
logs, and how they are used by technical support to diagnose issues. The topics covered include:
- A brief genealogy of logs
- Different kinds of logs, and when they are useful
- The basics of reading logs
- Sequence of operations
- Choosing the right text editor to examine logs
- Things to do to make logs more useful
- Using logs to diagnose a few common problems and questions
A brief genealogy of logs
The diagnostics available in the Action Request System have evolved over time, as the features of the product
have grown, and as requested by Technical Support to drive quicker resolution of problems. Server-side logging,
including Filter, SQL, and API logging, has existed since very early versions. Client-side logging in Remedy User
to record the activity of Active Links and Macros was introduced in version 3.2; see KB 6084 for how to enable
these logs. Beginning with Action Request System 4.0, active link and macro logging could be enabled by
selecting an option directly from Remedy User (choose Tools > Options, select the Advanced tab, and enable Workflow
Logging).
This macro logging was fairly limited, in the sense that it would not report values passed to macros, an omission
which was corrected in Remedy User version 4.03 and later. Beginning with Remedy User version 4.5, a method
was also introduced to record server-side activity through client-side logging, thereby reducing the amount of
information recorded to the logs, which, as discussed later, is quite helpful when reading log files. (This feature of
Remedy User 4.5 is described in the ARS Server Administrator's Guide, Ch 2.)
Remedy client-side API logging was available in Remedy User 4.01, as documented in KB 3456, but was
subsumed by the Remedy User 4.5x client trace modes (enabling API logging in Remedy User 4.5x).
On the server side, Action Request System version 4.5 introduced a more scalable implementation of
server-side processing using threads, and with it, logging to track the activity of multiple threads.
Prior to this, server logs (filter, API, SQL, etc.) would write to different log files with a file extension indicating
which server process handled the activity. For example, after enabling filter logging, you may see several files on
your server - arfilter.log, arfilter.log.390621, arfilter.log.390635 - and have to search all of them for the desired
log segment.
As of this writing, ARS 4.52 is the latest version, but later versions may include additional logging features where
appropriate, to facilitate diagnosing new functionality. Since logging capabilities have changed, an occasional
request from tech support is to use a particular version of Remedy User to enable logging and reproduce the
behavior, as it gives better diagnostic information.
Different kinds of logs, and when they are useful
The most common logs requested are Filter and Workflow (Active Link/Macro) logging. There is a fair amount of
overlap in the functionality which can be implemented by Active Links and/or Filters, so a common request is to enable
both logs (on the client side, if on Remedy User 4.5) at the same time and reproduce the behavior so both can
be examined. The most common symptoms reported are either an error message or a field whose value is
changed unexpectedly, and both of these (a Message action and a Set Fields/Push Fields action, respectively) can be
implemented by either an Active Link or a Filter. A Run Process action can also be performed via either an Active
Link or a Filter, so there are several cases where these logs are appropriate. Compare the actions available when
creating a new Filter versus a new Active Link; in all of these overlapping cases, it is appropriate to make concurrent
Workflow/Filter logs.
Similarly, escalations can perform these actions (Set Fields, Run Process, etc.), but they occur at a time
independent of user activity, and the errors appear in arerror.log instead of being presented to the user. In
these cases, an escalation log would be appropriate, typically running the log long enough for the error or
symptoms to occur, and then disabling the log.
SQL logs record the SQL commands sent from the Action Request System server to the database server, but
do not include the result set returned from the database server. All of the data in forms, and the definitions of
the forms and workflow objects themselves, are stored in the database, so SQL logging generates a great deal of
low-level information. Interpreting SQL logs requires greater experience, both with the SQL language and with
the Remedy data dictionary. The Remedy data dictionary is documented in the later chapters of
the "Server Administrator's Guide" (ARS version 4.5), or the "Remedy Administrator's Guide Volume 2" (ARS
version 4.0x), but it is not expected that customers are familiar with this low-level information. SQL logging may
be requested by Technical Support in situations of data corruption, to troubleshoot searches which return
unexpected results, or occasionally when it would be useful to search for a particular value that would only appear
in the SQL log.
API logs show the Remedy API calls made by a Remedy client (such as Remedy User, Remedy Administrator,
Remedy Import, or any API program such as RappSvc, BBExecd, etc.) to the Action Request System server.
Many, but not all, API calls result in a subsequent SQL call to the database server. Exceptions include checks of user
or form information which may be cached in memory on the Action Request System server and consequently do not
require a call to the database server. The API calls are logged, but not all the parameters to each call. API logs
record low-level information which customers are not expected to interpret, except in cases where they
write API programs. The Remedy Programmer's Reference describes the Remedy API calls. Most commonly, API
logs will only be requested when written to the same file as the SQL log, so the SQL calls can be correlated to an
API call.
User logging records when users log in and log out of the server. This may be useful to troubleshoot cases
where the number of logged-in users is different from what is expected, but it is mostly used independently of the other
logs.
Thread logging is new in ARS 4.5x, to track thread activity. It is used primarily to investigate performance issues
and the load balancing behavior of the ARS Server.
The basics of reading logs
There is a unique prefix at the beginning of each line, indicating which log the line is from. This is
particularly helpful when you enable multiple logs to a single file.
- Filter log
- API log
- SQL log
- Workflow log (ARS 4.5x, Active Links and macros)
- Active Links
- Macro logging
Server-side logs also include, on each line, a server timestamp and the login name of the user initiating the action.
Most of the information that will concern you, however, is further to the right, so scroll right to see the workflow
activity.
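If you have written several logs to a single file and want to examine one kind of activity in isolation, a short script can split the combined file by line prefix. The following Python sketch is only an illustration; the prefix strings and file names are placeholders, so substitute the actual prefixes you see at the start of the lines in your own log.

# Hypothetical prefix-to-output mapping; replace the keys with the actual
# line prefixes that appear in your combined log file.
prefixes = {
    "<FLTR>": "filter_only.log",
    "<SQL >": "sql_only.log",
    "<API >": "api_only.log",
}

outputs = {tag: open(name, "w") for tag, name in prefixes.items()}

with open("combined.log", "r", errors="replace") as log:
    for line in log:
        for tag, handle in outputs.items():
            if line.startswith(tag):
                handle.write(line)
                break

for handle in outputs.values():
    handle.close()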
All logs begin with a line indicating the log has been enabled, and end with a line indicating it has been disabled. This
looks something like the following:
/* Thu May 17 2001 10:52:03.9740 Demo */
Filter Trace Log -- ON
...
/* Thu May 17 2001 10:52:03.9840 Demo */
Filter Trace Log -- OFF
After this, they will include a line indicating the start of some operation. For example, if a user was to create a
new request, there would be a line indicating the beginning of this operation:
Start filter processing -- Operation - CREATE
or
Start active link processing -- Operation - On Window Open
And a line indicating the end of that operation:
Stop filter processing
or
Stop active link processing - On Window Open
Note that in the Active Link/Workflow logs, the end line gives an indication of which operation is ended, whereas
the server side logs just indicate the end of the current operation.
Immediately following the line beginning the operation are two or three lines with additional information about
the operation. Below are three examples. The first is from a filter operation, selecting request # 7 from the AR
4.5 Sampler form. The second is an active link log segment, recorded when opening the base_ReportTemplate
form in a "Create a New" window. The third is another active link log, but on the operation of selecting an item
from a character menu. This snippet tells us the name of the field with the attached menu, and indicates the menu is accessed while the form is open in a Create a New window. As we'll discuss below, information such as
the form name, user name, and the Request ID will help us find the transaction of interest, by keyword
searching for these values in the log.
Start filter processing -- Operation - GET
AR 4.5 Sampler - 000000000000007
Start active link processing -- Operation - On Window Open
For Schema - base_ReportTemplate
On screen type - CREATE
Start active link processing -- Operation - On Menu Choice
For Schema - base_ReportTemplate
Linked to field - Primary Audience (600001643)
On screen type - CREATE
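When a log covers more than a handful of operations, a small script can pull out just the operations that mention a value you noted, such as the Request ID, form name, or login name. This is only a rough Python sketch, assuming the "Start ... processing" marker text shown in the examples above; the file name and keyword are placeholders.

def find_operations(log_path, keyword):
    # Group lines into blocks, where each block starts at a
    # "Start filter processing" or "Start active link processing" line,
    # and keep only the blocks that mention the keyword somewhere.
    blocks, current = [], []
    with open(log_path, "r", errors="replace") as log:
        for line in log:
            if ("Start filter processing" in line
                    or "Start active link processing" in line):
                if current and any(keyword in text for text in current):
                    blocks.append("".join(current))
                current = []
            current.append(line)
    if current and any(keyword in text for text in current):
        blocks.append("".join(current))
    return blocks

# Placeholders: the log file name and a Request ID noted while testing.
for block in find_operations("arfilter.log", "000000000000007"):
    print(block)
    print("-" * 60)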
Between the start and end lines of the operation, you will see each workflow object defined to execute on
this operation being checked, in the sequence defined by their execution orders. In filter logs, you will also see filters
which are disabled, but in workflow (active link) logs, active links which are disabled are not reported.
For each workflow object checked, the log will report PASSED if the Run If qualification is true, and will show the If
actions performed. If the Run If qualification is not met, the workflow object will be reported as having FAILED qualification,
and any Else actions will be performed, if any are defined.
Example, from filter log:
Checking CreateMessage (500)
Passed -- perform actions
And from workflow log:
Checking base_Report: Window Close (0)
Passed qualification -- perform if actions
There can be multiple If Actions, Else Actions, or both, defined for a filter or active link. In the log, they will be
listed in the order defined in the workflow, with the first one numbered zero. Below is an example of an Active Link
which performs three actions: a Set Fields action, a Push Fields action, and a Change Fields action.
Checking SeveralIFActions (0)
-> Passed qualification -- perform if actions
0: Set Fields
Assigned To (4) = Demo
Status (7) = 0
1: Push Fields
To Schema User on Server @
2: Set Characteristics
For field -- Short Description (
Change field to read-only
There are two important things to note about the log example above. First, it shows the actual values set in the Set Fields
action. This is information you cannot get by looking at the Active Link itself, since the value might be set from a
user-entered field, or from a different record than expected.
Second, the partial log above does not tell you where the data was pulled from, or exactly how the Set Fields action is defined. The Set Fields action may have been hard-coded to the value Demo, or it could have
been defined to be populated from a keyword, a user-defined field, or a field on another form. It also does not tell
you whether a pick list was presented to the user because multiple entries matched, and the user selected one with a value of
Demo.
Thus, making appropriate log files does not entirely replace looking at the workflow definitions, but it does help
direct your investigation by recording what happened and with what data.
Sequence of operations
Below is an example from a filter log, showing a filter which has several If Actions. Filter logs are more involved
than Active Link logs, because filter actions occur in multiple phases. (See the ARS 4.5 Server Administrator's Guide,
Ch 5, for more information.) In the example, you can see that the Push Fields action is defined first in the filter,
but the action is actually performed after the Set Fields and Message actions.
Checking FilterWithSeveralActions (500)
--> Passed -- perform actions
0: Push Fields
1: Set Fields
Assigned To (4) = Myself
2: Message
Your request has been processed
/* Sun Jun 10 2001 15:14:51.8000 */ End of filter processing (phase 1)
/* Sun Jun 10 2001 15:14:51.8000 */ Restart of filter processing (phase 2)
/* Sun Jun 10 2001 15:14:51.8000 */ 0: Push Fields
Email Address (103) = youraddress@yourcompany.com
/* Sun Jun 10 2001 15:14:51.8000 */ Start filter processing -- Operation - SET
User - 000000000000001
The example above is simple, because there is just one filter defined on this form. In real world situations, the
first instance of "0: Push Fields" and the second instance of it, where the action is performed in Phase 2, may be
separated by several pages of other filters executing, and in Phase 2, there may be several deferred Push Fields
actions.
There are a few tricks you can use to make examining the log easier. First, note the line
Restart of filter processing (phase 2)
which marks the point where the deferred Push Fields action is performed. Another thing you can search for is
0: Push Fields
though this may instead find another filter that has a Push Fields action defined in it.
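One way to apply these tricks without scrolling is a small script that prints the line number of every phase-2 restart and every Push Fields action, so you can jump straight to them in your editor. This is a sketch only, assuming the marker text shown in the example above; the file name is a placeholder.

# Markers taken from the filter log example above; adjust if your log differs.
markers = ("Restart of filter processing", ": Push Fields")

with open("arfilter.log", "r", errors="replace") as log:
    for number, line in enumerate(log, start=1):
        if any(marker in line for marker in markers):
            print(f"{number}: {line.rstrip()}")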
Choosing the right Text Editor to view log files
All of the log files described above are text-based, and you can use any text editor to view them.
With experience, you'll find there are several features in a text editor that are useful in viewing log files:
Enable/disable word wrap - The server-side logs in particular have very long lines, so if you have word wrap selected in
your viewer it may be hard to see what is going on in the log, as a single line will wrap in your window to two
or three. Typically, it is preferable to disable word wrap and scroll to the right to see the other information. The
ability to specify a smaller font may also be useful, so you can fit more per line.
Keyword search both forward and backward - Notepad allows you to easily search up or down in the file from your
current position, as does MS Word. Wordpad only allows you to search down.
Handling of LF characters - Log files generated on a UNIX server have line feed characters, rather than CRLF
characters, to separate lines. If you copy that log file to a Windows system and view it in Notepad, the carriage
returns are not added, and the lines are not separated. Viewing the file in Wordpad makes it easier to read. If you
copy/paste it into MS Word, and then copy/paste it back out of Word, the CR characters are added, so you can "convert"
it to a format easily viewed in other text editors (a small script that does the same conversion is sketched after this list).
How it handles the file in memory - When viewing huge log files, you may want to use Wordpad, or another editor
which handles memory effectively, rather than one which loads the whole file into memory.
Bookmarks and color coding - Programmers' text editors often include the ability to add bookmarks or to define
strings to be displayed in a different color. This may be helpful when viewing large log files, for example by having the Start and
End operation lines shown in a particular color.
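As an alternative to the Word round-trip mentioned above for UNIX log files, a few lines of script can add the carriage returns directly. This is a minimal sketch; the input and output file names are placeholders.

# Convert a UNIX (LF-only) log file to Windows (CRLF) line endings so that
# Notepad separates the lines correctly. File names are placeholders.
with open("arfilter.log", "rb") as unix_file:
    data = unix_file.read()

# Normalize any CRLF already present, then write CRLF throughout.
data = data.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")

with open("arfilter_windows.log", "wb") as windows_file:
    windows_file.write(data)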
Things to do to make logs more helpful
When making logs to investigate problems or scope changes, there are several things you can do to make them
easier to examine. One is to keep them small, running the logs only long enough to capture the targeted
behavior. Server side logs can become large in a short period of time, so enabling logs after hours or at a time
when other user activity is minimized is preferable. The new ability to enable server-side logging from Remedy
User 4.5 is a big help in this, allowing you to limit server logs to activity generated by a single user. When doing
this, remember to leave the logs on long enough to capture the behavior, confirming its completion before
disabling the logs. A new tool available with ARS 4.5x allows you to look at log files in real time, to facilitate this.
See "Server Administrator's Guide page 2-19 for a description of using arlogdisplay.exe.
Second, make a note of the request ID, the login name of the user, and the steps performed when the logs are
recording. This will make it much easier to search for the relevant part of the logs.
On ARS Server versions prior to 4.5, separate log files are generated for each server process, using the RPC
numbers. For example, enabling filter logging on a server with MPSO (multiple fast and list servers enabled)
may generate several log files in the same location, named arfilter.log, arfilter.log.390621, arfilter.log.390635,
etc. Remember to search (and send) all of these log files; the activity may be distributed among multiple server
processes, so examining all of the files is necessary to verify no information is lost. When convenient, you may
want to temporarily disable MPSO (disable all fast and list servers) before making these server-side logs on
servers earlier than version 4.5. Typically, that is only practical on test servers.
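When you do end up with several per-process log files, a short script can search all of them at once and report which file and line each match was found in. This is a rough sketch; the keyword is a placeholder, and it assumes the script is run from the directory containing the arfilter.log* files.

import glob

# Placeholder: a Request ID, login name, or other value noted during the test.
keyword = "000000000000007"

for path in sorted(glob.glob("arfilter.log*")):
    with open(path, "r", errors="replace") as log:
        for number, line in enumerate(log, start=1):
            if keyword in line:
                print(f"{path}:{number}: {line.rstrip()}")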
Finally, make a note of the behavior which you observe. Error messages and text may be helpful in searching the
log. If there is a difference in behavior in two different situations, such as different users or different
workstations, make two sets of logs, and note which went with which scenario. This is information which is easy
to capture at the time that you are making logs, but difficult to remember afterwards, so it's good practice to jot
notes as you make the logs.
Using logs to diagnose a few common problems and questions
A few common scenarios where log files are useful to diagnose or investigate functionality include:
Scenario 1: A field is being set to a particular value on submit, but you are not sure why, or where the value is
being pulled from.
Using logs: If the update to the field appears as soon as the request is saved, it may be a filter or an active link
which makes the update, but most likely not an escalation. (If the change only appears several minutes later,
then it very likely could be an escalation.) To investigate, enable both filter and Active Link logs, and recreate the behavior. In searching the logs, you may search for the name of the updated field, its field ID, or the value.
Remember that selection lists or option fields will show the value as the integer position of the selection; for example, it will be set
to 0 for the first value.
Scenario 2: An error or warning message appears when you try to make a certain modification to a request,
indicating you must update some other field. You want to determine the source of this error, and may want to
disable it.
Using logs: The message is given by a Message action. As with the previous example, it can be given by a filter,
active link, or escalation, and the most likely candidates are the first two. Enable both filter and active link
logging, recreate the behavior, and search the logs for either the error message, or ": Message" (that is "colon
space Message").
Scenario 3: You want to customize or mimic the behavior of the application in one place, and want to know what
is involved.
Using logs: Since filter and active link logs show you the workflow defined on the operation, even when it does not
pass qualification, they give a more comprehensive view of what functionality may be involved in your change. If you
enable logs and see that there is significant workflow behind the operation, you can better determine whether the
change is a simple one, or more of a project.
Ref:
See the "Server Administrator's Guide, Action Request System 4.5", Chapter 2, for additional information and
examples of log files.