Coyote accelerator backend #1347
base: main
Conversation
```diff
- while (coyote_thread.checkCompleted(coyote::CoyoteOper::LOCAL_TRANSFER) != batch_size) {
-     std::this_thread::sleep_for(std::chrono::nanoseconds(50));
- }
+ while (coyote_thread.checkCompleted(coyote::CoyoteOper::LOCAL_TRANSFER) != batch_size) {}
```
Wouldn't this cause 100% CPU usage while the program is polling?
On one of the cores, yes.
But sleeping for less than ~50 µs is not well-defined on most Linux platforms. Hence, the measured latency can range from ~4 µs to >50 µs even though the "true" execution latency is still 4 µs.
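A common middle ground between pure busy-waiting and sleeping is to spin for a short budget (preserving microsecond-scale latency for fast completions) and only then back off with short sleeps. The sketch below is a generic illustration of that pattern in Python, not the Coyote API; `poll_until`, its parameters, and the `check` callable are all hypothetical names for this example.

```python
import time


def poll_until(check, spin_budget_s=50e-6, backoff_s=50e-6, timeout_s=1.0):
    """Spin-wait briefly for low latency, then back off with short sleeps.

    `check` is a zero-argument callable returning True when the operation
    has completed. Spinning keeps latency low for fast completions, while
    the sleep fallback avoids pinning a core at 100% for long transfers.
    """
    deadline = time.monotonic() + timeout_s
    spin_end = time.monotonic() + spin_budget_s
    while not check():
        now = time.monotonic()
        if now > deadline:
            return False  # timed out
        if now > spin_end:
            time.sleep(backoff_s)  # spin budget spent: cede the core
    return True
```

The same two-phase idea applies to the C++ polling loop under review: spin for the expected ~4 µs completion window, then fall back to `sleep_for` for stragglers.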
…-backend (fastmachinelearning#1347) Merge branch 'init_interval_fix_zeropad_maxpooling' into coyote-accelerator-and-pooling
A few misc comments based on trying to run the CoyoteAccelerator backend for a dummy model. Right now, I am stuck with a Python import error:

which is puzzling, because I do have jinja2 installed in my environment and the same import works fine in an interactive Python session.
Also, can you fix the pre-commit issues?
```python
filedir = os.path.dirname(os.path.abspath(__file__))
srcpath = os.path.join(filedir, '../contrib/Coyote/')
dstpath = f'{model.config.get_output_dir()}/Coyote'
copytree(srcpath, dstpath)
```
Do we want to use the dirs_exist_ok argument here? In the current version, this fails when running for the same project twice.
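To illustrate the suggestion: `shutil.copytree` raises `FileExistsError` when the destination already exists, unless `dirs_exist_ok=True` (available since Python 3.8) is passed, which makes the copy idempotent across reruns. The paths below are temporary-directory placeholders, not the actual hls4ml layout.

```python
import os
import shutil
import tempfile

src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()  # already exists, like a reused output_dir
with open(os.path.join(src, 'shell.sv'), 'w') as f:
    f.write('// placeholder\n')

# With dirs_exist_ok=True, copying into an existing destination succeeds,
# so running the writer twice for the same project no longer fails.
shutil.copytree(src, dst, dirs_exist_ok=True)
shutil.copytree(src, dst, dirs_exist_ok=True)  # second run also succeeds
```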
```python
    output_dir='hls4ml_prj_coyote',
    backend='CoyoteAccelerator',
    board='u55c')
hls4ml.build(bitfile=True)
```
This should probably be `hls_model` instead of `hls4ml`.
| ) | ||
|
|
||
| if not os.path.exists(f'{model.config.get_output_dir()}/build/{model.config.get_project_name()}_cyt_hw'): | ||
| os.mkdir(f'{model.config.get_output_dir()}/build/{model.config.get_project_name()}_cyt_hw') |
I think this needs to use `os.makedirs()`, because the `build` folder doesn't exist already.
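For illustration: `os.mkdir` fails with `FileNotFoundError` when an intermediate directory is missing, while `os.makedirs` creates the whole chain, and `exist_ok=True` additionally makes reruns safe. The path below mirrors the reviewed code but uses a temporary directory and a hypothetical project name as placeholders.

```python
import os
import tempfile

output_dir = tempfile.mkdtemp()  # stands in for model.config.get_output_dir()
hw_dir = os.path.join(output_dir, 'build', 'myproject_cyt_hw')

# 'build' does not exist yet, so os.mkdir(hw_dir) would raise
# FileNotFoundError; os.makedirs creates both levels in one call.
os.makedirs(hw_dir, exist_ok=True)
os.makedirs(hw_dir, exist_ok=True)  # no-op on a second run
```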
```rst
Example
======================

Similar to the ``VivadoAccelerator`` backend, we first generate a bitstream from a Keras model ``model`` and a config.
```
Documentation should mention that hls4ml needs to be cloned with the submodules checked out to get Coyote, and that a Vitis installation needs to be present.
Description
Generally, Coyote offers several advantages compared to some other shells, including:
The backend is briefly described in Section 9.7 of the paper: https://arxiv.org/pdf/2504.21538.
Type of change
Tests
This backend was compared against a modified* version of the VivadoAccelerator backend: the backend was modified to run HLS synthesis with Vitis instead of Vivado (also using Vitis templates and optimizers), while the rest of the backend infrastructure (drivers, data movers) remained the same, since it also works in newer versions of Vivado. Results are attached below, clearly indicating an advantage for Coyote, for two reasons: (1) optimised data movement, bypassing card memory, and (2) an optimised host-side library (Python, C++).
In principle, the correct test would be to compare against VitisAccelerator (#991), but only after the io_parallel issues are resolved. However, the expectation is that the results will stay mostly the same, since the underlying platform requires a data copy between host and card memory.
Will add some more results, also for io_stream CNNs, and comparisons to VitisAccelerator.
Figure above: comparison of CoyoteAccelerator with modified Vivado Accelerator for the UNSW-NB15 dataset in io_parallel.
Checklist
I have run `pre-commit` on the files I edited or added.