The current Verilog maths and Python fixed-point test modules choose different values in some cases where the inputs can't be precisely represented in binary.
We should also log the model's `a` and `b` values and investigate the difference.
Fixing this issue would allow us to run random tests against the maths modules.
Two new tests show this problem:
```python
# test fails - model and DUT choose different sides of true value
@cocotb.test(expect_fail=True)
async def nonbin_4(dut):
    """Test 3.6/0.6"""
    await test_dut_divide(dut=dut, a=3.6, b=0.6)


# test fails - model and DUT choose different sides of true value
@cocotb.test(expect_fail=True)
async def nonbin_5(dut):
    """Test 0.4/0.1"""
    await test_dut_divide(dut=dut, a=0.4, b=0.1)
```
```
666.03ns INFO  cocotb.regression  running nonbin_4 (30/37)
                                    Test 3.6/0.6
688.03ns INFO  cocotb.div         dut a:     000111001
688.03ns INFO  cocotb.div         dut b:     000001001
688.03ns INFO  cocotb.div         dut val:   001100101
688.03ns INFO  cocotb.div         6.3125
688.03ns INFO  cocotb.div         model val: 000101.1101
688.03ns INFO  cocotb.div         5.8125
688.03ns INFO  cocotb.regression  nonbin_4 passed: failed as expected (result was AssertionError)
688.03ns INFO  cocotb.regression  running nonbin_5 (31/37)
                                    Test 0.4/0.1
710.03ns INFO  cocotb.div         dut a:     000000110
710.03ns INFO  cocotb.div         dut b:     000000001
710.03ns INFO  cocotb.div         dut val:   001100000
710.03ns INFO  cocotb.div         6
710.03ns INFO  cocotb.div         model val: 000011.0000
710.03ns INFO  cocotb.div         3
710.03ns INFO  cocotb.regression  nonbin_5 passed: failed as expected (result was AssertionError)
```
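One plausible explanation, consistent with the logged values, is that the DUT truncates its inputs onto the 4-fractional-bit (Q.4) grid while the Python model rounds to nearest. The sketch below is not the project's code: the helper names are hypothetical, the Q.4 grid is inferred from the model output `000101.1101`, and the trunc-vs-round split is an assumption that happens to reproduce both logged mismatches.

```python
# Hypothetical sketch: quantise a and b onto the Q.4 fixed-point grid
# two different ways and divide. ASSUMPTION (not confirmed by the
# source): DUT truncates toward zero, model rounds to nearest.

FRAC_BITS = 4
SCALE = 1 << FRAC_BITS  # 16 steps per unit on the Q.4 grid


def quant_trunc(x):
    """Truncate onto the Q.4 grid (assumed DUT behaviour)."""
    return int(x * SCALE) / SCALE


def quant_round(x):
    """Round to nearest on the Q.4 grid (assumed model behaviour)."""
    return round(x * SCALE) / SCALE


def divide(a, b, quant):
    """Quantise inputs, divide, and quantise the result the same way."""
    return quant(quant(a) / quant(b))


for a, b in [(3.6, 0.6), (0.4, 0.1)]:
    print(a, b, divide(a, b, quant_trunc), divide(a, b, quant_round))
```

Under these assumptions, `3.6/0.6` gives 6.3125 (truncating) versus 5.8125 (rounding), and `0.4/0.1` gives 6.0 versus 3.0, matching the log: the two quantisation policies land on different sides of the true quotient. Logging the model's quantised `a` and `b` would confirm or refute this.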
* Add test with small divisor 1/0.2 (issue #164).
* Use fixed-point for GTKWave display.
* Improve model division.
* Add tests for values that can't be precisely represented in binary. See issue #167.