9/13/2017 12:20:00 PM
to azkaban.properties to be able to override mail
hostname and port number links.
// User-facing web server configurations used to construct the user-facing server URLs.
// They are useful when there is a reverse proxy between Azkaban web servers and users.
// enduser -> myazkabanhost:443 -> proxy -> localhost:8081
// When these parameters are set, they are used to generate email links.
// If these parameters are not set, jetty.hostname and jetty.port (or jetty.ssl.port if SSL is configured) are used.
public static final String AZKABAN_WEBSERVER_EXTERNAL_HOSTNAME = "azkaban.webserver.external_hostname";
public static final String AZKABAN_WEBSERVER_EXTERNAL_SSL_PORT = "azkaban.webserver.external_ssl_port";
public static final String AZKABAN_WEBSERVER_EXTERNAL_PORT = "azkaban.webserver.external_port";
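For illustration, here is a minimal sketch of how an email-link base URL could be built from these properties, falling back to the jetty settings when they are absent. The class and helper below are hypothetical and use a plain java.util.Properties lookup rather than Azkaban's actual configuration API.
// Hypothetical sketch: derive the user-facing base URL for email links, preferring
// the external_* properties and falling back to the jetty listener settings.
public final class ExternalUrlSketch {

  static String baseUrl(java.util.Properties props, boolean sslConfigured) {
    String host = props.getProperty("azkaban.webserver.external_hostname");
    String port;
    if (host != null) {
      // External settings win when a reverse proxy fronts the web server.
      port = sslConfigured
          ? props.getProperty("azkaban.webserver.external_ssl_port")
          : props.getProperty("azkaban.webserver.external_port");
    } else {
      // Otherwise fall back to the jetty listener settings.
      host = props.getProperty("jetty.hostname");
      port = sslConfigured
          ? props.getProperty("jetty.ssl.port")
          : props.getProperty("jetty.port");
    }
    String scheme = sslConfigured ? "https" : "http";
    return scheme + "://" + host + ":" + port;
  }

  public static void main(String[] args) {
    java.util.Properties props = new java.util.Properties();
    props.setProperty("azkaban.webserver.external_hostname", "myazkabanhost");
    props.setProperty("azkaban.webserver.external_ssl_port", "443");
    System.out.println(baseUrl(props, true)); // prints https://myazkabanhost:443
  }
}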
|
9/12/2017 5:05:57 PM
(#1454)
This method is only used in tests.
Make the name more descriptive.
This is particularly helpful when doing code reviews.
|
9/12/2017 1:29:02 AM
options in schedule page
|
9/11/2017 10:41:26 PM
this change, support for reporting events of interest from Azkaban has been provided.
The default implementation uses a Kafka event reporter. Users can provide alternate
implementations of the reporter. To begin with, the following events are reported:
1. FLOW_STARTED
2. FLOW_FINISHED
3. JOB_STARTED
4. JOB_FINISHED
In the future, this can easily be extended to report other events. The default event reporter
implementation uses gobblin-metrics, which provides convenient methods for event creation
and submission. Gobblin-metrics also provides a schema for the events. The default
implementation uses the Kafka async producer.
Configuration changes:
Note: All changes must be applied to the executor server or the solo-server if in solo mode.
// Property is used to enable/disable event reporting with the default being false.
event.reporter.enabled=false
// Alternate implementations of the reporter can be specified using this property.
event.reporter.class=com.foo.EventReporterImpl
// Kafka topic name for the default implementation.
event.reporting.kafka.topic=TestTopicName
// Kafka broker list for the default implementation.
event.reporting.kafka.brokers=hostname.com:port_num
// Schema registry server for the default kafka implementation.
event.reporting.kafka.schema.registry.url=schemaRegistryUrl.com:port/schema
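As a hypothetical illustration of the alternate-reporter hook referenced by event.reporter.class, a custom reporter might look roughly like the sketch below. The report(...) signature is an assumption for illustration only; the real class must implement Azkaban's event reporter interface, whose exact contract is not shown here.
package com.foo;

import java.util.Map;

// Hypothetical alternate reporter; the method signature is assumed and may not
// match Azkaban's actual event reporter interface.
public class EventReporterImpl {

  // Called once per event of interest (FLOW_STARTED, FLOW_FINISHED, JOB_STARTED,
  // JOB_FINISHED) with metadata such as flow name, job id, and timestamps.
  public boolean report(String eventType, Map<String, String> metadata) {
    // Trivial sink: log the event instead of producing to a Kafka topic.
    System.out.println("event=" + eventType + " metadata=" + metadata);
    return true;
  }
}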
Testing Done:
End-to-end tests were performed and the following scenarios were verified:
1. Consumed the emitted Kafka events and verified that the messages received were as expected.
2. Event reporter enabled/disabled scenario.
3. Kafka broker down scenario.
4. Reporter enabled, but Kafka brokers undefined in the config scenario.
|
9/11/2017 9:41:53 PM
Wait until the FAILED job event has been handled before succeeding the other jobs.
- Extra: make InteractiveTestJob more reliable with volatile fields
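For context, the sketch below shows the visibility problem that volatile addresses in a polled test job; the class and field names are illustrative stand-ins, not the actual InteractiveTestJob code.
// Illustrative only: without volatile, the runner thread may never observe the
// flag flipped by the test thread, so the job can hang or finish at the wrong time.
public class VolatileFlagSketch {

  private volatile boolean succeed = false; // volatile guarantees cross-thread visibility

  public void run() throws InterruptedException {
    while (!succeed) {          // runner thread polls the flag
      Thread.sleep(10);
    }
    System.out.println("job finished");
  }

  public void markSucceeded() { // called from the test (controller) thread
    succeed = true;
  }

  public static void main(String[] args) throws Exception {
    VolatileFlagSketch job = new VolatileFlagSketch();
    Thread runner = new Thread(() -> {
      try { job.run(); } catch (InterruptedException ignored) { }
    });
    runner.start();
    Thread.sleep(50);
    job.markSucceeded();
    runner.join();
  }
}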
|
9/11/2017 7:11:11 PM
(#1444)
|
9/11/2017 3:00:46 PM
module doesn't belong to the common module.
It also helps speed up the build a bit by taking better advantage of
the Gradle cache and project dependency management.
Fixes #1227
|
9/11/2017 2:13:12 PM
the job6 status was SUCCEEDED if killing the flow took longer than usual, because the test only slept for at most 1 second.
Failure from Travis logs:
azkaban.execapp.FlowRunnerTest > exec1FailedKillAll FAILED java.lang.AssertionError: Wrong status for [job6] expected:<KILLED> but was:<SUCCEEDED>
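A common fix for this kind of flakiness is to poll for the expected status with a deadline instead of a fixed one-second sleep. The sketch below is illustrative only; the Status enum and the supplier are hypothetical stand-ins for the test's real helpers.
import java.util.function.Supplier;

// Hypothetical helper showing the wait-with-deadline pattern.
public final class WaitForStatusSketch {

  enum Status { RUNNING, KILLED, SUCCEEDED }

  static void waitForStatus(Supplier<Status> statusSupplier, Status expected, long timeoutMs)
      throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (statusSupplier.get() != expected) {
      if (System.currentTimeMillis() > deadline) {
        throw new AssertionError("Timed out waiting for status " + expected
            + ", last seen " + statusSupplier.get());
      }
      Thread.sleep(50); // poll rather than assume the kill completes within 1 second
    }
  }
}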
|
9/9/2017 9:51:00 PM
used by the internal build system.
|