Conversation

@bo3z (Contributor) commented Jul 28, 2025

Description

📝 This PR introduces a new accelerator backend, CoyoteAccelerator, which leverages the open-source Coyote shell for deploying models on a PCIe-attached FPGA.

Compared to other shells, Coyote offers several advantages, including:

  • Networking support, so the backend can easily be extended to support distributed inference; this is also interesting for in-network ML.
  • GPU-FPGA integration, so models can be executed on a combination of hardware.
  • Dynamic reconfiguration, which could allow run-time reconfiguration of models.
  • Multi-tenancy, so multiple models could be deployed concurrently.

The backend is briefly described in Section 9.7 of the paper: https://arxiv.org/pdf/2504.21538.

Type of change

  • New feature (non-breaking change which adds functionality)
  • A new research paper code implementation

Tests

This backend was compared against a modified* version of the VivadoAccelerator backend: that backend was modified to run HLS synthesis with Vitis instead of Vivado (also using the Vitis templates and optimizers), while the rest of the backend infrastructure (drivers, data movers) remained the same, since it also works in newer versions of Vivado. Results are attached below and clearly indicate an advantage for Coyote, for two reasons: (1) optimised data movement, bypassing card memory, and (2) an optimised host-side library (Python, C++).

In principle, the correct test would be to compare against VitisAccelerator (#991), but only after the io_parallel issues are resolved. However, the expectation is that the result will stay mostly the same, since the underlying platform requires a data copy between host and card memory.

More results will be added, including an io_stream CNN and comparisons to VitisAccelerator.

[Figure: comparison of CoyoteAccelerator with the modified VivadoAccelerator for the UNSW-NB15 dataset in io_parallel.]

Checklist

  • I have read the guidelines for contributing.
  • I have commented my code, particularly in hard-to-understand areas.
  • I have made corresponding changes to the documentation.
  • My changes generate no new warnings.
  • I have installed and run pre-commit on the files I edited or added.
  • I have added tests that prove my fix is effective or that my feature works.

while (coyote_thread.checkCompleted(coyote::CoyoteOper::LOCAL_TRANSFER) != batch_size) {
    std::this_thread::sleep_for(std::chrono::nanoseconds(50));
}
while (coyote_thread.checkCompleted(coyote::CoyoteOper::LOCAL_TRANSFER) != batch_size) {}
Review comment (Contributor):
Wouldn't this cause 100% CPU usage while the program is polling?

Reply (Contributor Author):

On one of the cores, yes.

But sleeping for less than ~50 µs is not well-defined on most Linux platforms. Hence, the measured latency can range from ~4 µs to >50 µs even though the "true" execution latency is still ~4 µs.

@JanFSchulte JanFSchulte added this to the v1.3.0 milestone Nov 5, 2025
lorenzo-as pushed a commit to lorenzo-as/hls4ml that referenced this pull request Dec 9, 2025
…-backend (fastmachinelearning#1347)

Merge branch 'init_interval_fix_zeropad_maxpooling' into coyote-accelerator-and-pooling
@JanFSchulte JanFSchulte left a comment


A few misc comments based on trying to run the CoyoteAccelerator for a dummy model. Right now, I am stuck with a Python import error:

[screenshot of the import error]

which is puzzling, because I do have jinja2 installed in my environment and the same import works fine in an interactive Python session.

Also, can you fix the pre-commit issues?

filedir = os.path.dirname(os.path.abspath(__file__))
srcpath = os.path.join(filedir, '../contrib/Coyote/')
dstpath = f'{model.config.get_output_dir()}/Coyote'
copytree(srcpath, dstpath)
Review comment (Contributor):
Do we want to use the dirs_exist_ok argument here? In the current version, this fails when running for the same project twice.
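For illustration, a self-contained sketch of the suggested fix (the directories here are temporary stand-ins, not the PR's actual paths, which come from the hls4ml configuration):

```python
import os
import tempfile
from shutil import copytree

# Stand-in directories; in the backend these are the Coyote template source
# and the project's output directory.
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()  # already exists, as on a second run of the same project
open(os.path.join(src, 'defines.h'), 'w').close()

# Without dirs_exist_ok=True (Python >= 3.8), copytree raises FileExistsError
# because the destination directory already exists.
copytree(src, dst, dirs_exist_ok=True)
assert os.path.isfile(os.path.join(dst, 'defines.h'))
```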

output_dir='hls4ml_prj_coyote',
backend='CoyoteAccelerator',
board='u55c')
hls4ml.build(bitfile=True)
Review comment (Contributor):
This should probably be `hls_model.build(bitfile=True)` instead of `hls4ml.build(bitfile=True)`.

)

if not os.path.exists(f'{model.config.get_output_dir()}/build/{model.config.get_project_name()}_cyt_hw'):
os.mkdir(f'{model.config.get_output_dir()}/build/{model.config.get_project_name()}_cyt_hw')
Review comment (Contributor):
I think this needs to use os.makedirs() because the build folder doesn't exist already.
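A standalone sketch of the suggested change (the directory names are hypothetical placeholders for the project output path):

```python
import os
import tempfile

# Stand-in for the hls4ml output directory; the real path comes from the config.
output_dir = tempfile.mkdtemp()
target = os.path.join(output_dir, 'build', 'myproject_cyt_hw')  # hypothetical project name

# makedirs creates the missing intermediate 'build' directory as well, which
# os.mkdir cannot do, and exist_ok=True keeps a second build from failing.
os.makedirs(target, exist_ok=True)
os.makedirs(target, exist_ok=True)  # idempotent on re-runs
assert os.path.isdir(target)
```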

Example
======================

Similar to the ``VivadoAccelerator`` backend, we first generate a bitstream from a Keras model ``model`` and a config.
Review comment (Contributor):
Documentation should mention that hls4ml needs to be cloned with the submodules checked out to get Coyote, and that a Vitis installation is needed to be present.
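A hedged sketch of what that documentation note could contain (the Vitis version and install path below are examples only and vary by site):

```shell
# Clone hls4ml with submodules so contrib/Coyote is populated
git clone --recursive https://github.com/fastmachinelearning/hls4ml.git

# Or, in an existing clone:
git submodule update --init --recursive

# A Vitis installation must be available; source its settings script,
# e.g. (example path, adjust to the local installation):
source /tools/Xilinx/Vitis/2022.2/settings64.sh
```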
