11/29/2016 7:45:14 PM
3.10.13.10.2
(#820)
This reverts commit 6207561922380f4dbed5f09e7b390e3a39d6779a.
|
11/28/2016 3:01:16 PM
3.10.0
error output to a log file correctly (#833)
Previously, the error output was still sent to stdout.
|
11/28/2016 2:48:12 PM
from DB (#834)
Let the exception from the JSON decode method propagate and catch it in the caller, where more information is available to be logged. This way it is easier to identify from the logs which project is having this issue.
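A minimal sketch of the pattern described above, assuming hypothetical class and method names (this is not Azkaban's actual decoding code):

```java
// Sketch: let the decode method throw, and catch at the caller,
// where the project name is known and the log entry is actionable.
public class DecodeSketch {
    static Object decodeJson(String json) {
        if (json == null || !json.trim().startsWith("{")) {
            // Propagate instead of swallowing the error here.
            throw new IllegalArgumentException("Invalid JSON: " + json);
        }
        return json; // placeholder for real decoding
    }

    static String loadProject(String projectName, String json) {
        try {
            decodeJson(json);
            return "loaded " + projectName;
        } catch (IllegalArgumentException e) {
            // The caller has the project context, so the failure can
            // be attributed to a specific project in the logs.
            return "failed to decode project " + projectName
                + ": " + e.getMessage();
        }
    }
}
```

The point of the design is that the low-level decoder has no project context; only the caller can produce a log line that names the failing project.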
|
11/23/2016 10:07:00 PM
Option to create Kafka Appender for user job logs (#778)
JobRunner now checks job properties to determine whether Kafka logging is
enabled. If it is, it will attempt to create a Kafka appender. If that
fails, it will report the failure but not crash the application.
User side:
A basic example of this usage would be to drop a properties file (named, for example, kafka.properties) with the following line into the root directory of your project:
azkaban.job.logging.kafka.enable=true
This property should cause all the jobs in your project to log to Kafka, using a producer provided by the executor server. Note that the executor server can override this property and refuse to provide this producer.
The property azkaban.job.logging.kafka.enable is by default false, so if left alone, the default behavior is to not log to Kafka.
Server admin side:
Two properties configure the Kafka producer used as a logger for user jobs; both can be set in azkaban.properties:
azkaban.server.logging.kafka.brokerList
azkaban.server.logging.kafka.topic
If either of these properties is missing, the executor server will refuse to create the producer. This can also act as a kill switch if the server admins decide they don't want anyone logging to Kafka, or don't have a Kafka cluster set up.
If anything goes wrong during the creation of the Kafka logger, we fail gracefully (without crashing the app) and report the failure in the flow log.
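As a sketch, the server-side configuration in azkaban.properties might look like the following; the broker address and topic name are placeholder values, not defaults:

```properties
# Kafka producer used for user job logging (values are illustrative)
azkaban.server.logging.kafka.brokerList=localhost:9092
azkaban.server.logging.kafka.topic=azkaban-job-logs
```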
* Refactored Constants.java into separate files and added new constants
Added some configuration strings into the Constants.java file. These
configurations specify properties to manage user job Kafka logging, as
well as job and flow level properties.
* Added server props object to job and flow runners
Changed job and flow runner objects to have a copy of the Azkaban
Executor server properties, so that when running flows we can activate
server-wide configurations.
As a result, a number of tests had to be updated.
|
11/23/2016 5:24:37 PM
(#827)
* fixSoloServerExecuteAsUser
* Updated file path to be consistent with current plugin configuration
|
11/23/2016 12:18:38 PM
adding necessary logs to help trace and debug issues
* log stack trace to logger
|
11/22/2016 5:39:56 PM
(#821) (#822)
When we use log4j to send a JSON message, the official PatternLayout will
emit corrupt JSON if certain characters are present. We use this
PatternLayoutEscaped class as a thin wrapper around PatternLayout to
escape those characters.
An example use case is sending JSON with a KafkaAppender: multi-line
messages result in corrupt JSON, so the newlines must be escaped before
being appended.
So far we know we need to escape backslashes, tabs, newlines, and quotes.
Unit tests were added for these cases.
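A minimal sketch of the escaping logic for the four characters named above; the class and method names are illustrative, not the actual PatternLayoutEscaped implementation:

```java
// Sketch of the JSON-escaping idea behind PatternLayoutEscaped.
// Names here are hypothetical, not the Azkaban API.
public class JsonEscapeSketch {
    // Escape the characters known to corrupt a JSON payload:
    // backslashes, tabs, newlines, and double quotes.
    static String escapeForJson(String message) {
        return message
            .replace("\\", "\\\\")   // backslashes first, so the
                                     // escapes below are not re-escaped
            .replace("\t", "\\t")    // tabs
            .replace("\n", "\\n")    // newlines
            .replace("\"", "\\\"");  // double quotes
    }

    public static void main(String[] args) {
        String multiLine = "line one\nline \"two\"";
        System.out.println(escapeForJson(multiLine));
    }
}
```

The ordering matters: backslashes must be escaped before the other replacements, since those replacements themselves insert backslashes.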
|
11/21/2016 8:12:14 PM
Dumps the current server port to the executor.port file and cleans it up on shutdown.
Also removes the process ID files that are created upon launch.
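A hedged sketch of that lifecycle, assuming a hypothetical helper class (the file name executor.port is from the commit; everything else is illustrative):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch of the executor.port file lifecycle: write the port on
// startup, delete the file on clean shutdown via a shutdown hook.
public class PortFileSketch {
    static Path writePortFile(Path dir, int port) {
        try {
            Path portFile = dir.resolve("executor.port");
            Files.write(portFile, String.valueOf(port).getBytes());
            // Clean the file up when the JVM shuts down.
            Runtime.getRuntime().addShutdownHook(
                new Thread(() -> portFile.toFile().delete()));
            return portFile;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    public static void main(String[] args) {
        Path f = writePortFile(
            Path.of(System.getProperty("java.io.tmpdir")), 12321);
        System.out.println("wrote " + f);
    }
}
```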
|
11/20/2016 4:36:58 AM
setting schedule panel conf
* removing deprecated schedule panel js
* Delete deprecated flow vm page
|