
Proposal for a more comprehensive approach to VUnit co-simulation #603

bradleyharden opened this issue Dec 2, 2019 · 17 comments

@bradleyharden

@LarsAsplund, @kraiger and @umarcor,

I would like to propose a broad strategy for co-simulation with VUnit. I really like what @umarcor has been working on, but I think his approach leaves most of the implementation to users. If I understand #568 correctly, users have to manually synchronize between VHDL and C/Python using clock signals run through a string_ptr. That approach is extremely powerful and flexible, but to me, it doesn't seem very easy to use or approachable for new users.

So far, I've found VUnit to be extremely easy to learn and use, because the VHDL libraries abstract away many of the low-level details. I never even knew string_ptr_pkg existed until I started reading the VUnit codebase out of interest. However, even with simple, easy-to-use libraries, I have struggled to convince my co-workers to learn and use VUnit. I fear it will be even harder to convince them to use VUnit for co-simulation if they have to implement most of the low-level details.

Based on the work I've already done for #521 and #522, and the discussion in #583, I would like to propose a more comprehensive approach to co-simulation. Essentially, I would like to communicate with C/Python code in the same way that I currently communicate with concurrent actors in VHDL, using messages and queues. I think this approach will require more work than what @umarcor has proposed, but I think the end result will be much more user-friendly. Furthermore, there is a lot of overlap between this proposal and my work on #521 and #522.

To support my proposal, I spent some time this past week prototyping various pieces. Here is an overview of a roadmap, as I see it:

  1. Implement ext_ptr.c and ext_ptr_pkg - Mostly complete

This library is essentially a port of string_ptr_pkg and integer_vector_ptr_pkg to C. It serves the same purpose in C as those packages do in VHDL. It is only responsible for providing shared memory among concurrent actors. It is not used for synchronizing between actors. I already have a complete, working implementation, with tests, here.
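As a rough illustration of the idea, a C-side buffer store could expose an interface along these lines (the names and signatures below are hypothetical, not the actual ext_ptr.c interface):

/* Hypothetical sketch of a C-side buffer store mirroring string_ptr_pkg /
 * integer_vector_ptr_pkg: buffers are identified by an integer id rather
 * than a raw pointer, so the same id can be held on the VHDL side. */
#include <stddef.h>
#include <stdint.h>

typedef int32_t ext_ptr_t;   /* integer id, usable as a VHDL constant */

ext_ptr_t ext_ptr_new(size_t length);                 /* allocate a buffer */
void      ext_ptr_reallocate(ext_ptr_t p, size_t length);
void      ext_ptr_deallocate(ext_ptr_t p);
size_t    ext_ptr_length(ext_ptr_t p);
uint8_t   ext_ptr_get(ext_ptr_t p, size_t index);     /* read one byte */
void      ext_ptr_set(ext_ptr_t p, size_t index, uint8_t value);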

  2. Modify string_ptr_pkg and integer_vector_ptr_pkg to cast to and from ext_ptr_t - Trivial

This change would make it possible to create instances of string_ptr and integer_vector_ptr in VHDL that are stored externally. This is the infrastructure for implementing external queues. Because ext_ptr_pkg and ext_ptr.c mimic the structure of the two VHDL ptr packages, adding this would be trivial.

  3. Update queue_pkg to push and pop instances of string_ptr rather than character, so that you can push and pop whole items - Complete

This approach is faster than the current implementation, and it is more useful, because the queue length now tells you how many items are left in the queue. #522 is a working and tested implementation. Performance comparisons are available in #521.

  4. Add external queues - Straightforward

Modify queue_pkg to accept an external option. For a given queue, if external is true, the queue would create external rather than internal instances of string_ptr, and it would push and pop using a C library, ext_queue.c.
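For illustration only, an ext_queue.c interface could look roughly like this (hypothetical names; items are whole buffers identified by their id, matching the string_ptr-based queue of step 3):

/* Hypothetical external queue API: each item is a whole buffer, referenced
 * by its integer id, so VHDL and C push/pop the same handles. */
#include <stddef.h>
#include <stdint.h>

typedef int32_t ext_ptr_t;    /* buffer id, as in the ext_ptr sketch above */
typedef int32_t ext_queue_t;  /* queue id */

ext_queue_t ext_queue_new(void);
void        ext_queue_push(ext_queue_t q, ext_ptr_t item); /* enqueue a buffer */
ext_ptr_t   ext_queue_pop(ext_queue_t q);                  /* dequeue the oldest buffer */
size_t      ext_queue_length(ext_queue_t q);               /* items left in the queue */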

  5. Modify the codec libraries to encode all VHDL types in a binary compatible manner - Mostly complete

With this change, any VHDL type encoded into an external string_ptr could be read directly by C code, without any decoding step. I have already done this while working on my changes for #521. I can make this code available, if you would like to see what I've done.
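To give a feel for what "no decoding step" means, suppose (purely as an assumption about the layout, not the actual VUnit codec) that an integer is encoded as its raw 4-byte little-endian representation; C code could then read it straight out of the shared buffer:

#include <stdint.h>
#include <string.h>

/* Read a VHDL integer from an encoded buffer, assuming the codec stores it
 * as 4 raw little-endian bytes (an assumption for illustration only). */
static int32_t read_encoded_integer(const uint8_t *buf, size_t offset)
{
    int32_t value;
    memcpy(&value, buf + offset, sizeof value);  /* same byte layout on both sides */
    return value;
}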

  6. Generalize the VUnit type system to make it modular and extensible - Mostly complete

This has two benefits. For co-simulation, it will set the stage for pushing and popping any VHDL type from external queues. More generally, it will allow us to create new VUnit data structures, like list_t. It will also make it easy to extend the existing dict_t to all types. The latter were my initial objectives in #521. As above, I have already implemented most of this. I just haven't made it public yet.

  7. Extend the type definitions from VHDL to C - No progress yet

As stated above, a generic type system can be easily extended to C, which will allow users to easily read and write VHDL types encoded by VUnit. This is the last step needed for a full implementation of external queues.

  8. Extend the com library from VHDL to C - Prototyping complete

This is the final piece that brings the whole proposal together. With this change, we can treat external actors like any other VUnit actor. All synchronization between VHDL and C/Python can be handled with messages. I have already prototyped the actor_t, actor_item_t, msg_t, envelope_t and mailbox_t data structures in C. With them, I demonstrated that you can add external actors, find them in VHDL, and push/pop messages (see here).

To send a message to an external actor, you simply have to create an external message that uses an external queue to store its data. The actual synchronization happens when calling the com function notify. When this function is called, it would call an ext_com.c function that would run a list of C functions specified by users. These C functions would essentially act as concurrent processes do in VHDL. When run, they would check for new messages and act accordingly.
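A minimal sketch of that callback mechanism, with hypothetical names (this is not the prototyped ext_com.c, just an illustration of the idea):

/* Hypothetical registry of external "processes". The com notify on the VHDL
 * side would call ext_com_notify(), which runs each registered C function so
 * it can check its mailbox for new messages and act accordingly. */
#include <stddef.h>

#define MAX_EXT_ACTORS 32

typedef void (*ext_actor_fn)(void *context);

static ext_actor_fn ext_actor_fns[MAX_EXT_ACTORS];
static void        *ext_actor_ctx[MAX_EXT_ACTORS];
static size_t       ext_actor_count;

void ext_com_register(ext_actor_fn fn, void *context)
{
    if (ext_actor_count < MAX_EXT_ACTORS) {
        ext_actor_fns[ext_actor_count] = fn;
        ext_actor_ctx[ext_actor_count] = context;
        ext_actor_count++;
    }
}

void ext_com_notify(void)
{
    /* Run every external actor once, so each can react to new messages. */
    for (size_t i = 0; i < ext_actor_count; i++)
        ext_actor_fns[i](ext_actor_ctx[i]);
}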

What do you think? Do you agree that this approach would be easier for users? I'm eager to work on this, but I don't want to devote more effort without buy-in from you three.

@LarsAsplund

@bradleyharden I'll read your proposal in detail later, but enabling message passing over language borders is certainly one of the goals of the work that has started.

Another statement of yours caught my eye:

However, even with simple, easy-to-use libraries, I have struggled to convince my co-workers to learn and use VUnit.

It's always interesting to hear inside stories about how VUnit is perceived. Could you elaborate on this? Why is this hard? What is the approach used instead?


umarcor commented Dec 2, 2019

@bradleyharden, overall, the proposal sounds astonishingly good!

Essentially, I would like to communicate with C/Python code in the same way that I currently communicate with concurrent actors in VHDL, using messages and queues.

As a matter of fact, my initial objective was not to co-simulate Python and VHDL, but C and VHDL only. I made cosim.py and the VUnitCoSim prototype after talking with @LarsAsplund, since he was precisely interested in having actors in Python that are equivalent to the ones in VUnit's VHDL libs. What you propose is exactly that.

I think his approach leaves most of the implementation to users. If I understand #568 correctly, users have to manually synchronize between VHDL and C/Python using clock signals run through a string_ptr. That approach is extremely powerful and flexible, but to me, it doesn't seem very easy to use or approachable for new users.

There are several issues here:

These external modes are currently only supported with GHDL, with some limitations:

  • With the latest stable version (v0.36), it does not work with the mcode backend. It is fixed in master, but we need to wait until v0.37 is released in order to update the CI jobs and test it with any backend. I think that Tristan wants to wait until synth features are out of beta to do so.
    • Even with v0.37, VUnitCoSim Python scripts require LLVM or GCC. This is because the execution flow expects an executable binary to be created. In order to use mcode, this needs to be changed. The same might apply to other simulators which support VHPIDIRECT/FLI.
  • Apart from that, although C-VHDL co-simulation will work on either GNU/Linux or Windows, the Python cosim.py example in VUnitCoSim #568 will not work on Windows.

That's mostly why finding/getting C sources and building the objects is done in the run.py (https://github.com/dbhi/vunit/blob/vunitcosim/examples/vhdl/external_buffer/run.py#L77) file without explicit built-in helpers. It is also why only lower level types are implemented for now.

Moreover, as commented in #481, the existing PLI option might be tightly related to this. Existing sources in vunit/vhdl/data_types/src/external/ghdl and vunit/vhdl/data_types/src/external might already work as PLI modules, with minor modifications. If that is the case, we would all have a stronger motivation to push features into the codebase, since those would be supported on at least three simulators.

Regarding clock signal management, I agree with you: it is powerful but not approachable for new users. It is especially confusing that a single array is used for multiple unrelated params. However, it is a workaround, not a definitive solution. Precisely, as commented in ghdl/ghdl#803, when using VHPIDIRECT, GHDL does not currently support features such as ghdl_runfor(10) or ghdl_rununtil(100). Therefore, clk_pkg (doing clock gating) is the solution I found, because I was not capable enough to extend GHDL. It is loosely based on @hackin's work, in the sense that it allows selecting between different clock frequencies to slow down or accelerate the simulation; but, in practice, all it does is simulate lots of clock cycles where everything remains idle. From a CPU usage perspective, it would be really handy to have ghdl_stop, ghdl_continue, etc.


The following thoughts are mostly to enhance the discussion. I think that we don't need to address them for now, because none of the immediate modifications, nor what you have already done, conflicts with this yet.

Note that I named vhpidirect_user.h similarly to vpi_user.h to use it as a canary. IMHO, the main point in favour of VHPIDIRECT is that it is much easier to understand and use than VPI. If vhpidirect_user.h is going to get as complex as vpi_user.h, we might need to reconsider the approach and have a look at cocotb (VPI) instead. I don't mean that implementing the changes to the queue and actors on the VHDL side does not make sense; precisely, I think it does, because I believe that the communication model in VUnit is quite different from cocotb's. However, I don't know at what point the C part would be better done in VPI.

This is especially so because VPI support in GHDL is better than VHPIDIRECT support, for example regarding step-by-step execution of the simulation. Nonetheless, with a queue-based simulation infrastructure, it is easy to keep the design on hold just by controlling the throughput of input data.

Anyway, it might be feasible to maintain two implementations (VHPIDIRECT and VPI). For this to make sense, we need to check which simulators support VPI only, which support VHPIDIRECT/FLI only, and which support both.

Precisely, yesterday I found out about this GHDL and Xyce co-simulation with cocotb: https://www.osti.gov/servlets/purl/1488489 (Xyce/Xyce#2). While that report uses VPI, this other article (https://www.osti.gov/biblio/1483152-application-note-mixed-signal-simulation-xyce) states:

"can be coupled with external simulators via either a Python-based interface that leverages the Python ctypes foreign function library or via the Verilog Procedural Interface (VPI)".

I believe that what we are talking about in these issues is equivalent to what they describe as a "Python-based interface that leverages the Python ctypes foreign function library". Precisely, they use DAC and ADC modules to implement interfaces between digital and analog modules. So, in a sense, all their infrastructure is based on queues of custom types (let's name the type analog_signal_t). From our point of view, you are describing the equivalent of XyceCInterface (section 2 of the second article), while VUnitCoSim's cosim.py files are roughly equivalent to the Python wrappers to XyceCInterface (section 3).

I think it is fortunate that VUnit just started using mypy for static typing. In the context of keeping a Python wrapper around a C API, I think that it can be very useful.


I think this approach will require more work than what @umarcor has proposed, but I think the end result will be much more user-friendly.

I'd be glad to help, once we write down specific tasks. I was about to have a look at #470 again, but I think that it can be better to first have stream VCs using external queues, before making the memory model external.

I already have a complete, working implementation, with tests, here.

At first glance, some reorganization is required. As commented, it should be feasible to use a simulator which supports FLI just by replacing the *-vhpi.vhd files in https://github.com/VUnit/vunit/tree/master/vunit/vhdl/data_types/src/external, while maintaining all other VHDL and most of C sources. By the same token, the content in data_types/src/ext_ptr should be reorganized so that users/developers need to change the minimum subset to adapt the feature. Regardless, it looks really beautiful. My most honest congratulations.

This change would make it possible to create instances of string_ptr and integer_vector_ptr in VHDL that are stored externally.

I assume that ext_acc and ext_fcn are removed, and you don't bind any access type between VHDL and C. I.e., pointers are not shared; only function calls (callbacks) are executed in VHDL. Is this correct?

Update queue_pkg to push and pop instances of string_ptr rather than character, so that you can push and pop whole items - Complete

How far are we from having 1, 2 and 3 functional? Is https://github.com/bradleyharden/vunit/tree/ext_pkg_prototypes/ the branch where you merged everything so far?

I can make this code available, if you would like to see what I've done.

I wonder which is the procedure to send a VHDL record type and access it as a C struct. Does the user need to define both types and you make the bytes fit? Isn't it possible that compilers reorder some fields in the struct?

As stated above, a generic type system can be easily extended to C, which will allow users to easily read and write VHDL types encoded by VUnit. This is the last step needed for a full implementation of external queues.

I see that this is similar to #470 (comment). That is, data is stored in C, but the content can only be properly interpreted in VHDL. From C, you just get a bunch of bytes (for now).

I think that here is where ext_acc might fit. It allows binding an access to a custom VHDL type to a custom C type. I think that it would be robust enough to survive compiler optimizations. So, maybe, the way to go would be to use ext_fcn for vectors and queues, and ext_acc for the custom types of the elements that are to be shared.

All synchronization between VHDL and C/Python can be handled with messages.

I expect the approach in Python to be similar to that in VHDL. So, users can use built-in Python queues, and we will provide a class (say ext_queue) that uses ctypes to interact with ext_pkg.c. Is that the idea?

Once again, congratulations and thank you for laying out this proposal and the prototypes.


peshola commented Dec 2, 2019

@umarcor, the SAND Report you reference above is a bit dated. It was updated for Xyce 6.11, in June 2019, as SAND Report SAND2019-6653 ("Application Note: Mixed Signal Simulation with Xyce 6.11"). There were various bug fixes done for Xyce 6.11. In addition, some of the function signatures in the XyceCInterface class, and the corresponding Python methods, were modified to make them more compliant with ANSI C. If you're interested in the details, then I can work out where to post that newer SAND Report so that it's publicly accessible. (The OSTI site is often several months behind.)

If you look at the Xyce repos on GitHub, the source code for the XyceCInterface class is in https://github.com/Xyce/Xyce/tree/master/utils/XyceCInterface. Those functions call methods in the Simulator class defined in https://github.com/Xyce/Xyce/blob/master/src/CircuitPKG/N_CIR_Xyce.h. Let me know if you have specific questions. We can discuss them here, or on the Xyce Google Group.

The directory https://github.com/Xyce/Xyce/tree/master/utils/XyceCInterface has some simple examples of invoking the interface. There are more examples in https://github.com/Xyce/Xyce_Regression/tree/master/Netlists/MIXED_SIGNAL. Unfortunately, those VPI examples use Icarus rather than GHDL.


umarcor commented Dec 3, 2019

Hi @peshola! I assume that you are Peter, one of the authors of the articles and the related work on Xyce. Nice to meet you and thanks a lot for coming by!

I created an issue in ghdl/ghdl#1052 to avoid hijacking this thread, which is quite complex already. I will reply to you there.

@LarsAsplund

While we could make the SW APIs a clone of what we have in VUnit, we should also consider integrating with one of the open message-passing frameworks out there. It would make it more accessible to many SW languages. I don't have any experience in this area; I just know that they exist.


umarcor commented Dec 3, 2019

@LarsAsplund, I believe that is essentially what I meant with hooks in the last two paragraphs of #583 (comment). We can think of ext_ptr as an API, with a specific 'internal' implementation (Bradley's). I think that the main issue with not using an internal implementation is that any general message-passing framework might not provide all the features.

Once we define the API and provide an implementation in C, it should be trivial to change it to a plugin system, where the internal implementation is just the default. The second plugin to be implemented might precisely be the one that uses Python's internal queues. There can be two variants: one that relies on the internal C implementation, and another one that completely hijacks the API (C is only used to define function signatures between GHDL and ctypes).
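To make the plugin idea concrete, one possible shape (purely speculative, not existing VUnit code) is a table of function pointers behind the fixed API, so that the internal C implementation, a Python-backed one, or an external framework can be selected at startup:

/* Speculative sketch of a swappable backend behind a fixed ext_ptr API. */
#include <stddef.h>
#include <stdint.h>

typedef int32_t ext_ptr_t;

typedef struct {
    ext_ptr_t (*new_ptr)(size_t length);
    void      (*deallocate)(ext_ptr_t p);
    uint8_t   (*get)(ext_ptr_t p, size_t index);
    void      (*set)(ext_ptr_t p, size_t index, uint8_t value);
} ext_ptr_backend_t;

static const ext_ptr_backend_t *active_backend; /* internal default or a plugin */

void ext_ptr_use_backend(const ext_ptr_backend_t *backend)
{
    active_backend = backend;
}

uint8_t ext_ptr_get(ext_ptr_t p, size_t index)
{
    return active_backend->get(p, index);  /* dispatch to the selected backend */
}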

EDIT

An example with GHDL and PLI, using #include <sys/ipc.h>; #include <sys/msg.h>: https://github.com/ghdl/ghdl/tree/master/testsuite/gna/issue152

@bradleyharden

@LarsAsplund, I don't want to get too sidetracked responding to your question, but I'll try to explain a little bit. In my case, I think there are a few different issues, but none of them are really the fault of VUnit. Maybe we can move the discussion to Gitter.

First, I gave a presentation on VUnit, but I realized afterwards that I tried to cram in WAY too much information. My objectives for the talk were to: review all features of VHDL-2008 and argue for its use; introduce VUnit as a test bench management tool that could improve our workflow; and introduce the VUnit VHDL libraries with extra information about how they work under the hood. I basically ended up speed-talking through the whole thing, skipped slides at the end, and still didn't have time for questions. We got kicked out of the conference room right as I was finishing. I think people saw value in VUnit, but I think it was like drinking from a fire hose.

Outside of my presentation, there are other, significant barriers. First, most of the projects we are working on right now are either at or beyond the CDR stage. Many people were intrigued by VUnit, but they did not want to revamp their test systems at such a critical time in the project. However, this will change in another year or two. Second, we frequently use the LEON soft-core processor. Gaisler provides a whole simulation and build ecosystem for the LEON, and no one is sure how to integrate VUnit. Personally, I haven't worked with the LEON yet, so I haven't dug in very deeply, but I would like to in the future. Interestingly, I think the LEON ecosystem actually features a very primitive unit test management system. I hope people will see the advantages of expanding on that idea with VUnit.

@umarcor, I have comments on a few of your different points.

  1. Linking

I don't know much about linking. It sounds like there are some issues to work through there. The gcc option seems like the most promising to me, but I'm not sure how best to integrate it with VUnit. I'll probably leave that to you and @kraiger to figure out.

  2. VHPIDIRECT vs. VPI

I haven't looked into VPI, but VHPIDIRECT seems sufficient for what I was envisioning. However, regardless of the protocol, I think we should be deliberate when deciding the role of VUnit in co-simulation, especially with respect to the broader co-simulation community. Personally, in my vision for VUnit co-simulation, I had imagined that VHDL would handle all aspects of simulation control flow. External actors would always act instantaneously in simulation time, and synchronization would be controlled by message passing. If you want to give control of the simulation to external software, then why not use cocotb? I have never used it before, but it seems like a much better option if you want to control the simulation externally. Ultimately, I think it would be better to avoid replicating features of cocotb and instead aim to offer a fundamentally different alternative. There are advantages and disadvantages to cocotb, and I think the VUnit co-simulation I describe could fit in as a new option with its own set of tradeoffs.

  3. Simulator support

I wasn't aware that other simulators supported VHPIDIRECT. Is that true? In fact, I originally thought VHPIDIRECT was Tristan's own, half-measure implementation of VHPI. However, if other simulators support VHPIDIRECT, then we should definitely support them. Alternatively, if we can accomplish the same goals with VPI and broaden the base of support, then that is an option too. I'm not well-versed in the differences, so I can't speak to that argument.

  4. VUnit API

I would like to clarify my vision for the VUnit co-simulation API and also offer some solutions to the issues you mentioned here and in #583.

First, I would like to point out that the VUnit VHDL libraries were written to work within several constraints of the VHDL language. The implementations are not ideal, but they work well enough. However, as a result, we must keep those details in mind when extending VUnit to other languages.

For example, the string_ptr_pkg and integer_vector_ptr_pkg data structures use a constant integer to avoid limitations placed on shared variables in VHDL. Thus, any implementation of ext_ptr.c MUST comply with this constraint. However, given this constraint, there is really only one sensible structure for storing references to shared memory. At some point, you have to translate that constant integer into a pointer/access type, and the best way to do that is to use it as an index into an array of pointers.
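A minimal sketch of that constraint, with illustrative names only: the VHDL side only ever holds the integer id, and C resolves it through a table.

#include <stdint.h>
#include <stdlib.h>

#define MAX_SHARED_BUFFERS 1024

static void   *shared_buffers[MAX_SHARED_BUFFERS]; /* id -> pointer table */
static int32_t shared_count;

/* Allocate a block and hand back only its integer id to VHDL. */
int32_t shared_new(size_t length)
{
    shared_buffers[shared_count] = calloc(length, 1);
    return shared_count++;
}

/* Translate the constant integer held in VHDL back into a C pointer. */
void *shared_resolve(int32_t id)
{
    return shared_buffers[id];
}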

My point is that I don't think it would ever make sense for users to define their own implementation of ext_ptr.c, because you have to know and understand the constraints from VHDL. Similar arguments apply for the com library. Really, the VHPIDIRECT functions exist only to cross the bridge to the outside world. Consequently, I don't think we should treat the VHPIDIRECT functions as the API; rather, I think we should treat the C functions as the API. We can provide a generic API in C, and we can let users build off of it. We can also define our own "default" extensions, e.g. an extension that connects ext_ptr.c with the Python shared memory module.

However, you correctly note that we should not let the constraints of VHDL limit us when defining the C API. To that end, I think we can make some minor modifications to improve ext_ptr.c. You gave an example in #583 where you wanted to share an existing pointer with VHDL. Currently, you would have to allocate a new ext_ptr and then copy the data in. However, we could easily change the ptr_new and ptr_reallocate functions in the C API to accept a copy argument. If copy is true, it will allocate a new buffer and copy the data. But if copy is false, then it will inject the supplied value pointer directly into the pointer storage data structure instead. Would that satisfy your needs?
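A sketch of what that could look like, building on the hypothetical index table above (again, names are illustrative, not an existing API):

#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define MAX_SHARED_BUFFERS 1024

static void   *shared_buffers[MAX_SHARED_BUFFERS];
static int32_t shared_count;

int32_t ptr_new(void *value, size_t length, bool copy)
{
    if (copy) {
        /* Own a private copy of the data, as the current behaviour would. */
        void *buf = malloc(length);
        memcpy(buf, value, length);
        shared_buffers[shared_count] = buf;
    } else {
        /* Inject the caller's pointer (e.g. a NumPy or Octave array) as-is. */
        shared_buffers[shared_count] = value;
    }
    return shared_count++;
}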

I would also like to note that we can make the bare pointers from C available in VHDL as well. If you look at my code, you will see that I was able to reverse-engineer the "fat pointer" for a VHDL line, and I'm sure I can do the same for integer_vector_access_t. Based on the line data structure, I think the solution would be portable, but I'm not 100% sure. Tristan does warn against using the fat pointers, so maybe he has a good reason for it. Furthermore, maybe this is all a moot point if we switch to VPI.

  5. Progress

It would not take me much effort to complete 1, 2 and 3 in the list above. However, I'm pretty short on time between now and the end of the year. I'm eager to work on it, but it still might be a while. However, maybe that's a good thing, since there may be more details to iron out.


umarcor commented Dec 5, 2019

I realized afterwards that I tried to cram in WAY too much information

According to your description, it seems that you tried to fit (at least) three presentations into a single one. That never works. But I understand you perfectly. When you have so much info that you are excited about and willing to share, it is so difficult to restrain yourself...

no one is sure how to integrate VUnit

I guess that a possible approach is to replace a makefile or testsuite.sh script of one of the smallest test suites with a run.py. The main issue you might find is the naming of testbenches, and the requirement to add a generic. My suggestion is to forget about all the VHDL libraries for now; get the tool into the codebase and into their workstations first. Then, as you get to know about issues they are having, just comment on how you use some specific lib/feature. In fact, I started using VUnit because it allows changing between ModelSim/QuestaSim and GHDL with an environment variable. No need to fight with separate build/execution scripts and interfaces. I discovered the VHDL libs much later.


  1. Linking

but I'm not sure how best to integrate it with VUnit. I'll probably leave that to you and @kraiger to figure out.

I think we should discuss this after all the packages and tests are ready to be used as an example. Considering previous discussions and the plan to split verification components into separate repositories (either in the org or in third-party namespaces), I foresee that most of the VHPIDIRECT-related code might end up in a sibling repo, say vunit/cosim. This is mostly because @LarsAsplund and @kraigher are having a hard time keeping up with all this work. I think it can be easier to maintain in the long term if we move all the C-related sources away (including the ones that I already contributed to the codebase).

My proposal is to implement all the changes that you suggest, and to keep all the VHDL sources which are not ext_* or ext-* in VUnit's codebase. These should include the logic to call external functions, but the ones in the codebase would be the *-novhpi* versions: empty bodies with an assertion of severity failure. Bear with me:

  • Those files compose what we call "VUnit's external VHDL API".
  • In vunit/cosim, we provide a specific implementation based on that API: "a VHPIDIRECT implementation of VUnit's external VHDL API in C"; initially targeting GHDL but extensible to other simulators. This is composed of:
    • The VHDL files that we now call ext*.
    • The C sources, headers, grt.ver files, etc.
    • Build scripts (bash, makefiles, python, whatever...) to generate objects with GCC.
    • A "VUnit plugin" in Python. The functionality of this plugin would be equivalent to the current external argument of add_builtins: vu.add_builtins({"string": True, "integer": True}). Optionally, the features in the previous bulletpoint can be merge into this plugin.

A user would install Python packages vunit_hdl and vunit_cosim:

from vunit import VUnit
from vunit_cosim import VHPIDIRECT

vu = VUnit.from_argv(vhdl_standard="2008", compile_builtins=False)
vu.add_builtins(VHPIDIRECT({
    "string": True,
    "integer": True,
}))

Advantages of this approach:

  • Users that don't want to use external features will get a cleaner codebase, changelog, etc.
  • @LarsAsplund, @kraigher can focus on modifications to the internal VHDL libs, which they have most knowledge about.
  • I'd say that no modifications are required in the codebase to support such a plugin. However, modifications would be required if another field is added (say "queue": True); nevertheless, those would be required anyway.
  • "VUnit's external VHDL API" is not limited to cosimulation. Anyone can provide an external implementation of the built-in types in VHDL. We talk about C and Python because that's the target, but someone might come up with some weird and interesting use case if we make it explicit. BTW, this is already the possible with the current implementation.
  2. VHPIDIRECT vs. VPI

I haven't looked into VPI, but VHPIDIRECT seems sufficient for what I was envisioning.

I agree. The main difference between VUnit/VHPIDIRECT and cocotb/VPI is the focus on VHDL or Python, respectively. The concept of an "external VHDL API" is interesting precisely because someone might want to plug VUnit's queues into Python by hijacking the body of the default "dummy" templates with VPI. This would apply to verification components too. There is no need to duplicate them in Python if cocotb makes it possible to manage VUnit's VHDL components.

I think we should be delibrate when deciding the role of VUnit in co-simulation, especially with respect to the broader co-simulation community. Personally, in my vision for VUnit co-simulation, I had imagined that VHDL would handle all aspects of simulation control flow.

I think that VUnit itself should play no role in the co-simulation. Does VUnit provide any feature to interact with the simulator at runtime? I vaguely remember that some features might be added to the TCL shell of some simulators, but I am unsure about it. Anyway, if we want to provide a Python library to handle actors, queues, etc. that interacts with vunit/cosim's VHPIDIRECT implementation, I think that it should belong to vunit/cosim.

However, regarding the management of the control flow, I think that we should not constrain it. GHDL needs to run freely and to execute the callbacks when it needs to. That's the low-level point of view. From an "application flow" perspective, the one that decides when to provide input data and when to read the output is the master. Hence, it can be the VHDL testbench, the (optional) C wrapper provided by the user, a Python script, Octave, a RPC framework... we shouldn't care.

External actors would always act instantaneously in simulation time, and synchronization would be controlled by message passing.

The first part is correct, but I think that the second one is optional. It is up to the user to use a solution such as clock gating for other types of synchronization.

If you want to give control of the simulation to external software, then why not use cocotb? I have never used it before, but it seems like a much better option if you want to control the simulation externally.

I might not want to control the execution from Python, or to use Python at all. VUnit and GHDL allow generating a binary that you can later use/call from any other tool. I might just want to use GHDL as a function in Octave, to use the workspace to allocate, build and process test data. I believe that cocotb provides access to each and every signal in the design, but it needs the Python interface at runtime. Actually, that's the main feature: https://cocotb.readthedocs.io/en/latest/introduction.html#overview

The question should not be about who controls the execution flow, but about what data you want to share. Users that want to share/transfer test data/messages will find VUnit to be a better fit (it is a test runner). Users that want to inspect specific signals in the design to do fault injection or any other targeted test will find cocotb to be better (because it is a Python-based VPI client).

I think the VUnit co-simulation I describe could fit in as a new option with its own set of tradeoffs.

That's it.

  3. Simulator support

I wasn't aware that other simulators supported VHPIDIRECT. Is that true? In fact, I originally thought VHPIDIRECT was Tristan's own, half-measure implementation of VHPI.

The reserved word VHPIDIRECT is defined in the standard; see, for example, section "20.2.4.3 Standard direct binding" of the 2008 LRM, as opposed to the keyword VHPI, which is used for "20.2.4.2 Standard indirect binding".

However, I think that the implementation is a subset of the subset, with Tristan's touch.

Regarding other simulators, if you search for examples using QuestaSim/ModelSim's Foreign Language Interface (FLI), you will be surprised:

  4. VUnit API

I don't think we should treat the VHPIDIRECT functions as the API; rather, I think we should treat the C functions as the API.

As explained above, I think that those are two tightly related APIs. FLI users can probably share the VHDL API and provide an alternative implementation of the C API.

We can also define our own "default" extensions, e.g. an extension that connects ext_ptr.c with the Python shared memory module.

Yes, I think that this fits in:

- vunit/cosim
  - src/
    - vhpidirect/
      - vhdl/
      - c/
      - py/     <- HERE
    - fli/
      - vhdl/
      - c/
  - test/
  - .git/
  - .github/

However, we could easily change the ptr_new and ptr_reallocate functions in the C API to accept a copy argument. If copy is true, it will allocate a new buffer and copy the data. But if copy is false, then it will inject the supplied value pointer directly into the pointer storage data structure instead. Would that satisfy your needs?

The main point of sharing pointers is precisely to avoid copying the data. The current approach is to use functions to cast data in and out of a char *, because an array of bytes is the lowest-level type that we will share with any other tool. This was the motivation for adding external modes to VUnit's internal types. If we are going to force data to be copied, we would be back at the starting point. Please see the initial comment that triggered all my set of PRs: #462. I'm all in with improving the queue system and providing a message-passing mechanism to/from foreign languages, even if all my contributions need to be rewritten or dropped. However, I think that we should preserve the features that were considered. This is related to my proposal to support hooking the storage mechanism in C. It's ok if this feature is supported by extending the default solution, as long as completely forking it is not necessary.

I would also like to note that we can make the bare pointers from C available in VHDL as well. If you look at my code, you will see that I was able to reverse-engineer the "fat pointer" for a VHDL line, and I'm sure I can do the same for integer_vector_access_t. Based on the line data structure, I think the solution would be portable, but I'm not 100% sure. Tristan does warn against using the fat pointers, so maybe he has a good reason for it.

I think that this can be very useful for GHDL + Xyce, because there is a C API already, and it'd be so handy to add only the VHDL side. I think that Tristan warns against it because it depends on some implementation detail of GHDL that is subject to change without prior notice. However, it might be safe to use otherwise.

  5. Progress

It's been 7 months since these enhancements started, and it is likely to take at least 2-3 more months. What do you think about writing some task list/tree as a kind of roadmap of the features/enhancements that we expect to work on? That would allow us to focus on specific tasks when we have time, and it would also help other people understand (and hopefully engage). We might use a wiki page or a gist.

@LarsAsplund

@bradleyharden

I think people saw value in VUnit, but I think it was like drinking from a fire hose.

I've been there. Developers not used to creating self-checking testbenches may walk away from a presentation just being interested in a simple script for incremental compilation. But that's a start and it's actually well worth the time spent on a presentation.

Personally, I haven't worked with the LEON

Me neither, but I did spend a few minutes trying to compile it to see how far I could get. If I remember correctly, I found that some files are generated, so the plain-text VHDL files are not enough to pass compilation.


LarsAsplund commented Dec 5, 2019

@bradleyharden

Personally, in my vision for VUnit co-simulation, I had imagined that VHDL would handle all aspects of simulation control flow. External actors would always act instantaneously in simulation time, and synchronization would be controlled by message passing.

This is pretty much the way I see it as well. Our target audience all know VHDL, and for the majority it is their main language. While they may recognize that VHDL has its limitations, they rarely think that starting from scratch and doing everything in another language like Python, Scala or even SystemVerilog is the way forward. Individuals certainly do some experiments, but it's a long way from that to changing the ways of entire companies. Things will probably change somehow in the future, but that change is very slow.


LarsAsplund commented Dec 5, 2019

What do you think about writing some task list/tree as a kind of roadmap of the features/enhancements that we expect to work on?

This is also something I think we need. There is a lot of work going on, with information spread over many issues with dependencies. What I'd like is a map showing the end use cases on one side and our prototypes, with instructions on how to replicate them, on the other side. In between are the tasks that take us from one side to the other. Boxes and arrows showing functional growth and dependencies.

The goals should be demos focusing on showing the mechanism. It doesn't have to be a cool application. Hello World type of examples are good enough. Some examples of demonstrators:

  • VHDL sharing memory with a SW application using shared memory models
  • Python bindings to do things like using a reg exp in your simulation
  • Use message passing/queues to interact with an external verification component, for example Octave/Matlab "monitor" plotting transactions/samples rather than using simple text logging.
  • Performing a multicore simulation by passing data between simulators using VUnit queues


umarcor commented Dec 7, 2019

I would also like to note that we can make the bare pointers from C available in VHDL as well. If you look at my code, you will see that I was able to reverse-engineer the "fat pointer" for a VHDL line, and I'm sure I can do the same for integer_vector_access_t. Based on the line data structure, I think the solution would be portable, but I'm not 100% sure. Tristan does warn against using the fat pointers, so maybe he has a good reason for it.

@bradleyharden, I asked Tristan in ghdl/ghdl#1052 (comment). See ghdl/ghdl#1053 also.


umarcor commented Dec 9, 2019

Work in progress to move cosimulation sources to a separate repo:

@LarsAsplund

@bradleyharden @umarcor One way to create that map I was talking about is to use user story mapping. I've created a start here. If you hand me an email address, I should be able to add you as editors to this map.

@bradleyharden

@umarcor and @LarsAsplund, I haven't had a ton of time to work on this, but I have made some progress recently. I've done a bit more prototyping, and I've encountered some design decisions that need to be made.

This post might end up being overly pedantic, but when I make design decisions, I like to trace my rationale from first principles. I thought it would be best to do that here, so that others can provide input as well.

Shared memory storage format

The first issue concerns the storage format for memory shared between VHDL and C. The implementation details of GHDL's VHPIDIRECT interface place some constraints on what you can and can't do with shared memory in both VHDL and C. I'd like to review the tradeoffs and get feedback on the decision.

To my knowledge, all available foreign language interfaces for VHDL (VPI, FLI & VHPI) allow you to call foreign functions from VHDL, but they do not allow you to call VHDL functions from foreign languages. SystemVerilog's DPI does allow you to call HDL functions from C code, but that's not an option for us. As a consequence, if you want to share memory between VHDL and C, you must create and manage the shared memory in C, and then expose C functions to VHDL.

In VHDL, we can allow access to the shared memory in a few different ways (sketched after the list below):

  1. Provide functions that get/set fixed-length segments from shared memory (e.g. functions that index into the shared memory and get/set a single byte or integer)
  2. Provide functions that get/set unconstrained arrays
  3. Provide functions that return VHDL access types pointing to the shared memory
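Sketched as hypothetical C-side prototypes (the names are illustrative; option 3 relies on the access-type layout discussed below):

#include <stddef.h>
#include <stdint.h>

uint8_t shared_get_byte(int32_t id, size_t index);      /* option 1: one element at a time */
void    shared_get_slice(int32_t id, uint8_t *dst,
                         size_t offset, size_t length);  /* option 2: array access; GHDL copies unconstrained arrays anyway (see below) */
void   *shared_get_access(int32_t id);                   /* option 3: pointer reused as a VHDL access value */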

So far, I have assumed we would implement a VUnit cosimulation library with GHDL's VHPIDIRECT interface. I have not thoroughly examined all the available foreign language interfaces, so I'm not 100% confident in this decision. However, from what I've seen, VPI, FLI and VHPI are all implemented similarly, and VHPIDIRECT seems like the most "VHDL-native" option that is also available in a free simulator.

Ideally, we would implement all three of the above methods for shared memory access. However, given the implementation details of VHPIDIRECT, #3 cannot be implemented without placing additional constraints on the C implementation. This is the crux of the design decision.

I doubt that @LarsAsplund has been closely following the discussions between Tristan, @umarcor and myself, so I'd like to explain a bit.

In VHPIDIRECT, unconstrained arrays are represented in C using the following data structure:

typedef struct {
  range_t *range;
  void    *value;
} array_t;

where range_t is defined as

typedef struct {
  uint32_t left;
  uint32_t right;
  uint32_t dir;
  uint32_t length;
} range_t;

On the other hand, the VHPIDIRECT data structure for an access type looks like this

typedef struct {
  range_t range;
  uint8_t value[];
} access_t;

In an unconstrained array, range and value are stored in separate memory blocks, but in an access type, range and value are concatenated in the same memory block.

As a result, if we want to manipulate shared memory blocks through VHDL access types, we must always allocate space for range_t at the beginning of each block. That is simple enough to implement, but it means that shared memory blocks must be allocated with that in mind. Using this approach, it would not be possible for users to take a memory block allocated elsewhere (e.g. a NumPy or Octave array) and share it with VHDL directly. In those cases, the data would first have to be copied into a newly allocated block that is prepended with range_t.
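As a sketch of that allocation constraint (the helper name is hypothetical, and the numeric encoding of dir is an assumption), a shared block intended to be handed to VHDL as an access type could be laid out like this:

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

typedef struct {
  uint32_t left;
  uint32_t right;
  uint32_t dir;
  uint32_t length;
} range_t;

/* Allocate header + payload in one contiguous block so the pointer can be
 * returned to VHDL as an access value. Existing data must be copied in. */
void *shared_block_new(const uint8_t *data, uint32_t length)
{
    range_t *block = malloc(sizeof(range_t) + length);
    block->left   = 0;
    block->right  = length - 1;
    block->dir    = 0;                    /* assumed encoding for 'to' */
    block->length = length;
    if (data != NULL)
        memcpy(block + 1, data, length);  /* payload starts right after the header */
    return block;
}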

Alternatively, we could exclude the use of VHDL access types. That would allow users to share any arbitrary memory block. But it would not necessarily reduce the amount of copying.

In VHPIDIRECT, when returning an unconstrained array from C to VHDL, GHDL will always copy the data from the C memory space to the VHDL memory space, so that it can deallocate the unconstrained array when it goes out of scope. However, when returning an access type from C to VHDL, the pointer is used directly and no data is copied.

I think the access type method offers some big advantages, because it provides flexibility in VHDL, which is often hard to come by. With access types, you can directly manipulate the shared memory using VHDL array syntax, allowing you to easily get and set slices. You can also view the entire array at once, without copying it, using .all. However, if you return the array using .all, I assume that GHDL will make a copy.

I'm not quite sure which approach is better overall, so I thought I would get your opinions.

I have other issues that I would like to discuss, but they are not directly related to this issue. I think this topic should at least get us started for now.

@GlenNicholls

What do you think about writing some task list/tree as a kind of roadmap of the features/enhancements that we expect to work on?

This is also something I think we need, There is a lot of work going on, information spread over many issues with dependencies. What I'd like is a map showing the end use cases on one side and our prototypes with instructions how to replicate them on the other side. In between are the tasks that takes us from one side to the other. Boxes and arrows showing functional growth and dependencies

The goals should be demos focusing on showing the mechanism. It doesn't have to be a cool application. Hello World type of examples are good enough. Some examples of demonstrators:

  • VHDL sharing memory with a SW application using shared memory models
  • Python bindings to do things like using a reg exp in your simulation
  • Use message passing/queues to interact with an external verification component, for example Octave/Matlab "monitor" plotting transactions/samples rather than using simple text logging.
  • Performing a multicore simulation by passing data between simulators using VUnit queues

@LarsAsplund I think this is needed. One of the things I struggled with when learning VUnit is that, beyond the simple examples, the only way to learn is to ask questions on Gitter and have someone respond with a more advanced feature, or to read through the code and figure out the underlying details of the framework. I think that having these types of examples would be really good for adding more advanced features/use-cases to the documentation. I have nothing to provide for this conversation, but I wanted to champion that aspect of the discussion, as at my company the VUnit documentation is the largest reason for the lack of integration. We find a lot of value, but I'm really the only one with the knowledge, and I don't know many people here who are willing to dive into the code to learn the more advanced features. Because of this, as you said, many of my colleagues are only interested in quick automated checks along with incremental compilation. Other than that, I haven't seen any desire to continue learning the advanced features, especially with additions like cosimulation.

@LarsAsplund

@GlenNicholls We should probably move this discussion elsewhere. As a start, could you please make a list of the features that you think would provide the most value to your colleagues if they were properly understood through examples and documentation? Put it on Gitter as a start.
