Question

Workaround for node filters in job definition randomly breaking scheduled execution

  • April 16, 2026
  • 2 replies
  • 34 views

BugHunter42

Hello,

Our users reported that many of their jobs fail to run when scheduled, with the following error:
 

com.dtolabs.rundeck.core.NodesetEmptyException: No matched nodes: NodeSet{includes={hostname=node_fqdn.tld, dominant=false, }}
at com.dtolabs.rundeck.core.execution.workflow.BaseWorkflowExecutor.validateNodeSet(BaseWorkflowExecutor.java:880)
at com.dtolabs.rundeck.core.execution.workflow.NodeFirstWorkflowExecutor.executeWorkflowImpl(NodeFirstWorkflowExecutor.java:92)
at com.dtolabs.rundeck.core.execution.workflow.BaseWorkflowExecutor.executeWorkflow(BaseWorkflowExecutor.java:220)
at com.dtolabs.rundeck.core.execution.WorkflowExecutionServiceThread.runWorkflow(WorkflowExecutionServiceThread.java:95)
at com.dtolabs.rundeck.core.logging.LoggingManagerImpl$MyPluginLoggingManager.runWith(LoggingManagerImpl.java:146)
at com.dtolabs.rundeck.core.execution.WorkflowExecutionServiceThread.run(WorkflowExecutionServiceThread.java:77)
Exception: class com.dtolabs.rundeck.core.NodesetEmptyException: No matched nodes: NodeSet{includes={hostname=node_fqdn.tld, dominant=false, }}
No matched nodes: NodeSet{includes={hostname=node_fqdn.tld, dominant=false, }}

 
This error doesn’t make sense in this case, as the node specified in the job’s filter actually exists in the project’s node list and should match.

The same job from the same project with the same filter works just fine when it’s run manually. This bug happens only when jobs are scheduled.

Has anyone experienced the same issue and found any solution, or at least a workaround?
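In case it helps anyone debugging the same symptom: you can check whether a node filter actually matches anything by querying the project’s resources endpoint of the Rundeck API. This is a minimal sketch, assuming a reasonably recent API version; the base URL, project name, filter, and token are placeholders you’d replace with your own.

```python
import urllib.parse


def build_nodes_url(base_url, api_version, project, node_filter):
    """Build the 'list resources' URL with a URL-encoded node filter."""
    query = urllib.parse.urlencode({"filter": node_filter})
    return f"{base_url}/api/{api_version}/project/{project}/resources?{query}"


def check_filter(base_url, api_version, project, node_filter, token):
    """Return the nodes a filter matches; an empty result would explain
    the NodesetEmptyException at schedule time."""
    import requests  # third-party; pip install requests
    resp = requests.get(
        build_nodes_url(base_url, api_version, project, node_filter),
        headers={
            "X-Rundeck-Auth-Token": token,  # placeholder token
            "Accept": "application/json",
        },
    )
    resp.raise_for_status()
    return resp.json()


# Example usage (all values are placeholders):
# nodes = check_filter("https://rundeck.example.com", 41, "myproject",
#                      "name: node_fqdn.tld", token="REPLACE_ME")
# print(f"matched {len(nodes)} node(s)")
```

If the filter matches when you query it manually but the scheduled run still reports “No matched nodes”, that at least rules out the filter syntax itself.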

2 replies

  • PagerDuty Team 📟
  • April 27, 2026

Hi! Usually that error is related to ACLs. Could you double-check your ACL policies? Probably some rule is blocking node access. More info here: https://docs.rundeck.com/docs/administration/security/authorization.html and here: https://github.com/rundeck/rundeck/issues/719#issuecomment-112204541
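For reference, a project ACL that grants the node access a scheduled job needs typically looks something like this. This is only an illustrative sketch in the standard aclpolicy YAML format; the project name and group name are placeholders.

```yaml
# Illustrative aclpolicy fragment; adapt project/group names to your setup.
description: Allow members of "devs" to read and run on project nodes
context:
  project: 'myproject'      # placeholder project name
for:
  node:
    - allow: [read, run]    # without read/run on nodes, filters match nothing
  job:
    - allow: [read, run]
by:
  group: devs               # placeholder group (e.g. an LDAP role)
```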


BugHunter42
  • Author
  • New Member 👋
  • April 27, 2026

Hello

Thanks for these links but no, it’s definitely not an ACL-related issue. And obviously job scheduling is allowed.

I’ve been managing Rundeck instances for several years, from back when you still had an IRC room… If it were ACLs, I would have found out quickly, because I’m quite experienced with Rundeck and I usually check ACLs whenever the logs show ACL issues or I have any doubt about them.

Even if the error message from the JVM is similar, my issue is a different one.

My issue is not “jobs fail unless target nodes are explicitly selected”. It’s “scheduled jobs with a node filter fail, unless they are run manually or unless newly created/cloned jobs are scheduled instead”.

Most jobs in most projects work just fine with the same ACLs, both the ones with node filters/default nodes and newly created jobs with node filters.

The ones that don’t work when scheduled still work without explicitly selecting nodes when you run them manually, i.e. keeping the default node selected by the filter in the job definition.

So basically they do work fine with the pre-selected, default node(s) when they are run manually, but fail when they are run on a schedule.

To clarify
1) ACL issues are straightforward to identify/reproduce. In this case, I can’t even reproduce the bug on similar, newly created jobs, or even on literally new instances of the same jobs in the same projects.

2) The ACLs were and are properly configured, including node ACLs (otherwise it would still fail for manually run jobs). And in addition to that, these problematic jobs used to work just fine even when scheduled. They suddenly failed without any ACL updates… They just stopped working for unidentified reasons.

3) Simply cloning the jobs with this issue, inside the same projects, on the same Rundeck instance, with the same users/accounts from the same LDAP roles, makes the bug “disappear”. But that still requires cloning each and every affected job to make them work again.

Our ACLs are at the project level, not the job level, and at the group level (LDAP role), not the single-account level; otherwise it becomes unmanageable, since we have way too many jobs, and users manage their own jobs once their projects are created and the appropriate ACLs are configured. So it doesn’t matter that the newer job clones have different names: they still use the same ACLs, inside the same projects, for the same users.
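Since cloning affected jobs one by one is the only workaround so far, that step can be scripted against the Rundeck API: export a job definition, then re-import it with the UUID stripped so Rundeck creates a fresh job. This is a hedged sketch; the endpoints follow the documented export/import API, but the base URL, API version, job ID, project, and token are all placeholders to adapt.

```python
import urllib.parse


def export_job_url(base_url, api_version, job_id):
    """URL to fetch a single job definition in YAML format."""
    return f"{base_url}/api/{api_version}/job/{job_id}?format=yaml"


def import_jobs_url(base_url, api_version, project):
    """URL to import job definitions, dropping UUIDs so a new job is created."""
    query = urllib.parse.urlencode({
        "fileformat": "yaml",
        "dupeOption": "create",   # create a new job instead of updating
        "uuidOption": "remove",   # strip the old UUID -> effectively a clone
    })
    return f"{base_url}/api/{api_version}/project/{project}/jobs/import?{query}"


def clone_job(base_url, api_version, job_id, project, token):
    """Export a job and re-import it as a fresh copy in the same project."""
    import requests  # third-party; pip install requests
    headers = {"X-Rundeck-Auth-Token": token}
    exported = requests.get(export_job_url(base_url, api_version, job_id),
                            headers=headers)
    exported.raise_for_status()
    resp = requests.post(import_jobs_url(base_url, api_version, project),
                         headers={**headers,
                                  "Content-Type": "application/yaml"},
                         data=exported.text)
    resp.raise_for_status()
    return resp.text


# Example usage (all values are placeholders):
# result = clone_job("https://rundeck.example.com", 41,
#                    "JOB-UUID", "myproject", token="REPLACE_ME")
```

You would still need to disable or delete the original broken job afterwards, so this only automates the drudgery, not the root cause.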