1/11/2019 3:57:21 PM
(#1994)
* Don't delete (clean) project versions with running executions
When a new version of a project is uploaded, the old versions are cleaned up (zip files are deleted from the DB).
This change makes the cleanup skip project versions that still have running executions.
Even those versions will eventually be cleaned up once their executions finish, because the basic filter remains `version < ?`
* Use ExecutorLoader instead of RunningExecutions
* Use unfinishedExecutions to also cover queued flows
As an optimization, fetch the unfinished executions without full flow data, as that's not needed
* Include submit_user in unfinished flow metadata
* Renamed method: fetchUnfinishedExecutions
* Renamed method: getExecutableFlowMetadataHelper
* More naming fixes: ~fetchUnfinishedFlowsMetadata
* Improve javadoc of cleanOlderProjectVersion
* Fix test name -> testFetchUnfinishedFlowsMetadata
* Implement MockExecutorLoader#fetchUnfinishedFlowsMetadata
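The skip logic described above can be sketched as a small filter: delete versions older than the new upload, except those with unfinished executions. This is an illustrative sketch, not Azkaban's actual implementation; `versionsToClean` and its parameters are hypothetical names.

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class VersionCleanupSketch {

    /**
     * Returns the project versions that are safe to clean: older than the
     * newly uploaded version, and without unfinished executions.
     */
    static List<Integer> versionsToClean(List<Integer> allVersions,
                                         int newVersion,
                                         Set<Integer> versionsWithUnfinishedExecutions) {
        return allVersions.stream()
                .filter(v -> v < newVersion)                                // basic filter: version < ?
                .filter(v -> !versionsWithUnfinishedExecutions.contains(v)) // skip running ones
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> cleanable = versionsToClean(
                Arrays.asList(1, 2, 3, 4), 4, Set.of(2));
        // Version 2 is skipped for now; it will still match "version < ?"
        // on a later upload, once its executions have finished.
        System.out.println(cleanable); // prints [1, 3]
    }
}
```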
|
1/7/2019 5:44:13 PM
3.66.0
to make it more visible to Azkaban users, as we have heard from internal users that they didn't know about some useful Azkaban features (like auto-retry on job failure) until they had done a fair amount of research.
|
1/7/2019 5:07:05 PM
Log kill command failures when killing jobs.
* Remove unused code
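The fix above amounts to recording why a kill command failed instead of failing silently. A minimal sketch of that pattern, assuming a hypothetical `JobRunner` interface and logger name (not Azkaban's real API):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class KillLoggingSketch {
    private static final Logger logger = Logger.getLogger("azkaban.execapp");

    /** Hypothetical stand-in for the component that can kill a running job. */
    interface JobRunner {
        void kill() throws Exception;
    }

    /**
     * Attempts to kill a job; on failure, logs which job's kill command
     * failed and why, rather than swallowing the exception.
     */
    static boolean killJob(String jobId, JobRunner runner) {
        try {
            runner.kill();
            return true;
        } catch (Exception e) {
            logger.log(Level.SEVERE, "Kill command failed for job " + jobId, e);
            return false;
        }
    }

    public static void main(String[] args) {
        // A runner whose kill command throws, to demonstrate the logging path.
        boolean ok = killJob("demo-job", () -> {
            throw new RuntimeException("process already gone");
        });
        System.out.println("kill succeeded: " + ok); // prints kill succeeded: false
    }
}
```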
|
1/4/2019 8:12:11 PM
(#2085)
|
1/2/2019 11:11:54 PM
when flow finishes (#2080)
|
12/21/2018 6:00:06 PM
flows at node level (#2073)
In most cases the centering event causes a zoom-out, so users need to zoom into the graph again to return to the previous view.
|
12/21/2018 5:58:27 PM
usability: In the Flow Execution page's Job List tab, all FAILED and KILLED jobs are now shown by default. If they are embedded, their parent flows are expanded; only flows containing FAILED or KILLED jobs are expanded. This is especially helpful with deeply nested flows, because users won't have to manually expand each flow to reach the logs of FAILED/KILLED jobs.
* Fixes according to review comments
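The expansion rule above (expand only parent flows that contain a FAILED or KILLED job) can be sketched as a simple filter. The `Job` model and method names here are illustrative, not Azkaban's actual UI code:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Set;
import java.util.stream.Collectors;

public class JobListFilterSketch {

    // Hypothetical minimal job model; field names are illustrative.
    static class Job {
        final String parentFlow;
        final String status;
        Job(String parentFlow, String status) {
            this.parentFlow = parentFlow;
            this.status = status;
        }
    }

    /** Parent flows to expand: only those containing a FAILED or KILLED job. */
    static Set<String> flowsToExpand(List<Job> jobs) {
        return jobs.stream()
                .filter(j -> j.status.equals("FAILED") || j.status.equals("KILLED"))
                .map(j -> j.parentFlow)
                .collect(Collectors.toSet());
    }

    public static void main(String[] args) {
        List<Job> jobs = Arrays.asList(
                new Job("flowA", "SUCCEEDED"),
                new Job("flowA", "FAILED"),
                new Job("flowB", "SUCCEEDED"));
        // flowB stays collapsed: none of its jobs failed or were killed.
        System.out.println(flowsToExpand(jobs)); // prints [flowA]
    }
}
```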
|