
Release v0.9.1
Signed-off-by: The Sionna Team <[email protected]>
gmarcusm committed May 31, 2022
1 parent b6fd9c5 commit 488e6c3
Showing 14 changed files with 171 additions and 158 deletions.
1 change: 1 addition & 0 deletions .gitattributes
@@ -1 +1,2 @@
.gitattributes merge=ours
.gitlab-ci.yml merge=ours
2 changes: 1 addition & 1 deletion README.md
@@ -35,7 +35,7 @@ On macOS, you need to install [tensorflow-macos](https://github.com/apple/tensor
```
>>> import sionna
>>> print(sionna.__version__)
0.9.0
0.9.1
```

3.) Once Sionna is installed, you can run the [Sionna "Hello, World!" example](https://nvlabs.github.io/sionna/examples/Hello_World.html), have a look at the [quick start guide](https://nvlabs.github.io/sionna/quickstart.html), or at the [tutorials](https://nvlabs.github.io/sionna/tutorials.html).
1 change: 1 addition & 0 deletions doc/source/api/ofdm.rst
@@ -37,6 +37,7 @@ The following code snippet shows how to setup and visualize an instance of
rg = ResourceGrid(num_ofdm_symbols = 14,
fft_size = 64,
subcarrier_spacing = 30e3,
num_tx = 1,
num_streams_per_tx = 1,
num_guard_carriers = [5, 6],
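For reference, a minimal sketch of the updated snippet with `num_tx` spelled out. This is a hedged example, not repository code: parameter values beyond those visible in the hunk are illustrative assumptions, and `rg.show()` is assumed to be the visualization helper the surrounding text refers to.

```python
# Hedged sketch: assumes sionna.ofdm.ResourceGrid with the parameters shown
# in the hunk above; anything not visible there is an illustrative assumption.
from sionna.ofdm import ResourceGrid

rg = ResourceGrid(num_ofdm_symbols=14,
                  fft_size=64,
                  subcarrier_spacing=30e3,
                  num_tx=1,                  # parameter now listed explicitly in the docs
                  num_streams_per_tx=1,
                  num_guard_carriers=[5, 6])
rg.show()  # visualize the resource grid
```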
4 changes: 2 additions & 2 deletions doc/source/installation.rst
@@ -37,7 +37,7 @@ e.g., using `conda <https://docs.conda.io>`_. On macOS, you need to install `ten
>>> import sionna
>>> print(sionna.__version__)
0.9.0
0.9.1
3.) Once Sionna is installed, you can run the `Sionna "Hello, World!" example <https://nvlabs.github.io/sionna/examples/Hello_World.html>`_, have a look at the `quick start guide <https://nvlabs.github.io/sionna/quickstart.html>`_, or at the `tutorials <https://nvlabs.github.io/sionna/tutorials.html>`_.

@@ -109,4 +109,4 @@ e.g., using `conda <https://docs.conda.io>`_.
>>> import sionna
>>> print(sionna.__version__)
0.9.0
0.9.1
2 changes: 1 addition & 1 deletion examples/5G_Channel_Coding_Polar_vs_LDPC_Codes.ipynb
@@ -6,7 +6,7 @@
"source": [
"# 5G Channel Coding and Rate-Matching: Polar vs. LDPC Codes\n",
"\n",
"*\"For block lengths of about 500, an IBM 7090 computer requires about 0.1 seconds per iteration to decode a block by probabilistic decoding scheme. Consequently, many hours of computation time are necessary to evaluate even a* $P(e)$ *in the order of* ${10^{-4}}$ *.\"* Robert G. Gallager, 1974 [7]\n",
"*\"For block lengths of about 500, an IBM 7090 computer requires about 0.1 seconds per iteration to decode a block by probabilistic decoding scheme. Consequently, many hours of computation time are necessary to evaluate even a* $P(e)$ *in the order of* ${10^{-4}}$ *.\"* Robert G. Gallager, 1963 [7]\n",
"\n",
"In this notebook, you will learn about the different coding schemes in 5G NR and how rate-matching works (cf. 3GPP TS 38.212 [3]).\n",
"The coding schemes are compared under different length/rate settings and for different decoders.\n",
130 changes: 69 additions & 61 deletions examples/Weighted_BP_Algorithm.ipynb

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion sionna/__init__.py
@@ -5,7 +5,7 @@
"""This is the Sionna library.
"""

__version__ = '0.9.0'
__version__ = '0.9.1'

from . import utils
from .constants import *
2 changes: 1 addition & 1 deletion sionna/channel/flat_fading_channel.py
@@ -109,7 +109,7 @@ class ApplyFlatFadingChannel(tf.keras.layers.Layer):
Tensor of channel realizations. Will be broadcast to the
dimensions of ``x`` if needed.
no : Scalar of Tensor, tf.float
no : Scalar or Tensor, tf.float
The noise power ``no`` is per complex dimension.
Only required if ``add_awgn==True``.
Will be broadcast to the shape of ``y``.
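To illustrate the corrected docstring, a hedged sketch of passing `no` either as a scalar or as a tensor that broadcasts to the shape of `y`. The tuple-style call and the shapes below are assumptions inferred from the surrounding docstring, not repository code.

```python
# Hedged sketch: shapes and the tuple-style call signature are assumptions
# based on the docstring in the hunk above.
import tensorflow as tf
from sionna.channel import ApplyFlatFadingChannel

channel = ApplyFlatFadingChannel(add_awgn=True)

batch, num_tx_ant, num_rx_ant = 64, 2, 4
x = tf.complex(tf.random.normal([batch, num_tx_ant]),
               tf.random.normal([batch, num_tx_ant]))
h = tf.complex(tf.random.normal([batch, num_rx_ant, num_tx_ant]),
               tf.random.normal([batch, num_rx_ant, num_tx_ant]))

no_scalar = 0.1                     # scalar noise power per complex dimension
y = channel((x, h, no_scalar))

no_tensor = tf.fill([batch, 1], 0.1)  # per-example noise power; shape chosen to broadcast to y
y = channel((x, h, no_tensor))
```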
38 changes: 19 additions & 19 deletions sionna/channel/tr38901/cdl.py
@@ -72,25 +72,25 @@ class CDL(ChannelModel):
The following tables from [TR38901]_ provide typical values for the delay
spread.
+--------------------------+-----------------+
| Model | Delay spread |
+==========================+=================+
| Very short delay spread | :math:`10` ns |
+--------------------------+-----------------+
| Short delay spread       | :math:`10` ns |
+--------------------------+-----------------+
| Nominal delay spread | :math:`100` ns |
+--------------------------+-----------------+
| Long delay spread | :math:`300` ns |
+--------------------------+-----------------+
| Very long delay spread | :math:`1000` ns |
+--------------------------+-----------------+
+-----------------------------------------------+-----------------------------------------+
| Delay spread [ns] | |
| | Frequency [GHz] |
| | |
+------------------------+----------------------+------+------+----+-----+-----+----+-----+
+--------------------------+-------------------+
| Model | Delay spread [ns] |
+==========================+===================+
| Very short delay spread | :math:`10` |
+--------------------------+-------------------+
| Short delay spread       | :math:`10` |
+--------------------------+-------------------+
| Nominal delay spread | :math:`100` |
+--------------------------+-------------------+
| Long delay spread | :math:`300` |
+--------------------------+-------------------+
| Very long delay spread | :math:`1000` |
+--------------------------+-------------------+
+-----------------------------------------------+------+------+----------+-----+----+-----+
| Delay spread [ns] | Frequency [GHz] |
+ +------+------+----+-----+-----+----+-----+
| | 2 | 6 | 15 | 28 | 39 | 60 | 70 |
+========================+======================+======+======+====+=====+=====+====+=====+
| Indoor office | Short delay profile | 20 | 16 | 16 | 16 | 16 | 16 | 16 |
| +----------------------+------+------+----+-----+-----+----+-----+
| | Normal delay profile | 39 | 30 | 24 | 20 | 18 | 16 | 16 |
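As a usage note for the reformatted table, a hedged sketch of instantiating `CDL` with one of the listed delay spreads. The antenna setup and constructor arguments are assumptions and may differ from the class signature at this version.

```python
# Hedged sketch: delay_spread is picked from the table above ("Long delay
# spread" = 300 ns). Antenna/AntennaArray arguments are assumptions.
from sionna.channel.tr38901 import CDL, Antenna, AntennaArray

carrier_frequency = 3.5e9
ut_array = Antenna(polarization="single",
                   polarization_type="V",
                   antenna_pattern="38.901",
                   carrier_frequency=carrier_frequency)
bs_array = AntennaArray(num_rows=1,
                        num_cols=4,
                        polarization="dual",
                        polarization_type="cross",
                        antenna_pattern="38.901",
                        carrier_frequency=carrier_frequency)
cdl = CDL(model="C",
          delay_spread=300e-9,              # "Long delay spread"
          carrier_frequency=carrier_frequency,
          ut_array=ut_array,
          bs_array=bs_array,
          direction="uplink")
```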
38 changes: 19 additions & 19 deletions sionna/channel/tr38901/tdl.py
@@ -57,25 +57,25 @@ class TDL(ChannelModel):
The following tables from [TR38901]_ provide typical values for the delay
spread.
+--------------------------+-----------------+
| Model | Delay spread |
+==========================+=================+
| Very short delay spread | :math:`10` ns |
+--------------------------+-----------------+
| Short delay spread       | :math:`10` ns |
+--------------------------+-----------------+
| Nominal delay spread | :math:`100` ns |
+--------------------------+-----------------+
| Long delay spread | :math:`300` ns |
+--------------------------+-----------------+
| Very long delay spread | :math:`1000` ns |
+--------------------------+-----------------+
+-----------------------------------------------+-----------------------------------------+
| Delay spread [ns] | |
| | Frequency [GHz] |
| | |
+------------------------+----------------------+------+------+----+-----+-----+----+-----+
+--------------------------+-------------------+
| Model | Delay spread [ns] |
+==========================+===================+
| Very short delay spread | :math:`10` |
+--------------------------+-------------------+
| Short delay spread       | :math:`10` |
+--------------------------+-------------------+
| Nominal delay spread | :math:`100` |
+--------------------------+-------------------+
| Long delay spread | :math:`300` |
+--------------------------+-------------------+
| Very long delay spread | :math:`1000` |
+--------------------------+-------------------+
+-----------------------------------------------+------+------+----------+-----+----+-----+
| Delay spread [ns] | Frequency [GHz] |
+ +------+------+----+-----+-----+----+-----+
| | 2 | 6 | 15 | 28 | 39 | 60 | 70 |
+========================+======================+======+======+====+=====+=====+====+=====+
| Indoor office | Short delay profile | 20 | 16 | 16 | 16 | 16 | 16 | 16 |
| +----------------------+------+------+----+-----+-----+----+-----+
| | Normal delay profile | 39 | 30 | 24 | 20 | 18 | 16 | 16 |
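Similarly for `TDL`, a hedged sketch using the nominal delay spread from the table; the call used to sample channel realizations follows the generic ChannelModel interface and is an assumption, as is the sampling frequency value.

```python
# Hedged sketch: 100 ns is the "Nominal delay spread" from the table above.
from sionna.channel.tr38901 import TDL

tdl = TDL(model="A",
          delay_spread=100e-9,
          carrier_frequency=3.5e9,
          min_speed=0.0,
          max_speed=3.0)

# Path gains `a` and delays `tau` for a batch of channel realizations;
# the sampling_frequency value is purely illustrative.
a, tau = tdl(batch_size=32, num_time_steps=14, sampling_frequency=15e3)
```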
5 changes: 2 additions & 3 deletions sionna/fec/interleaving.py
@@ -476,7 +476,7 @@ def call_inverse(self, inputs):

# use seed if explicit seed is provided
if seed is not None:
seed = (tf.constant(1337), tf.constant(seed))
seed = (tf.constant(1337), tf.cast(seed, tf.int32))
elif self._keep_state:
# use sequence as defined by seed
seed = self._seed
@@ -603,8 +603,7 @@ def call(self, inputs):

# use seed if explicit seed is provided
if seed is not None:
#assert isinstance(seed, int), "seed must be int."
seed = (tf.constant(1337), tf.constant(seed))
seed = (tf.constant(1337), tf.cast(seed, tf.int32))
# only generate a new random sequence if keep_state==False
elif self._keep_state:
# use sequence as defined by seed
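To show what the change buys, a small self-contained sketch (not repository code): `tf.cast` accepts both a Python int and an already-built tensor seed and normalizes its dtype to int32, whereas `tf.constant` is not guaranteed to accept a tensor produced elsewhere in the graph.

```python
# Small, self-contained sketch of the seed handling: tf.cast normalizes the
# user-supplied seed to int32, whether it arrives as a Python int or as a
# tensor with a different integer dtype.
import tensorflow as tf

def make_stateless_seed(seed):
    """Build the (fixed, user) seed pair used by stateless random ops."""
    return (tf.constant(1337), tf.cast(seed, tf.int32))

print(make_stateless_seed(42))                        # Python int
print(make_stateless_seed(tf.constant(7, tf.int64)))  # tensor with another dtype
```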
99 changes: 51 additions & 48 deletions sionna/signal/utils.py
@@ -67,18 +67,10 @@ def convolve(inp, ker, padding='full', axis=-1):
# Reshape the input to a 2D tensor
batch_shape = tf.shape(inp)[:-1]
inp_len = tf.shape(inp)[-1]
inp_dtype = inp.dtype
ker_dtype = ker.dtype
inp = tf.reshape(inp, [-1, inp_len])

# If one of `inp` or `ker` is complex-valued, then the output
# is complex-valued. Otherwise, the output is real-valued.
if inp.dtype.is_complex and ker.dtype.is_floating:
ker = tf.complex(ker, tf.zeros_like(ker))
elif inp.dtype.is_floating and ker.dtype.is_complex:
inp = tf.complex(inp, tf.zeros_like(inp))
# We need to know if we will need to compute the imaginary-component
# of the output
complex_output = bool(inp.dtype.is_complex or ker.dtype.is_complex)

# Using Tensorflow convolution implementation, we need to manually flip
# the kernel
ker = tf.reverse(ker, axis=(0,))
@@ -88,48 +80,59 @@
# Tensorflow convolution expects a channel dim for the convolution
inp = tf.expand_dims(inp, axis=-1)

# Pad the kernel or input if required depending on the convolution type.
# Also, set the padding-mode for TF convolution
if padding == 'valid':
if complex_output:
inp_real = tf.math.real(inp)
inp_imag = tf.math.imag(inp)
ker_real = tf.math.real(ker)
ker_imag = tf.math.imag(ker)
out_1 = tf.nn.convolution(inp_real, ker_real, padding='VALID')
out_2 = tf.nn.convolution(inp_imag, ker_imag, padding='VALID')
out_3 = tf.nn.convolution(inp_real, ker_imag, padding='VALID')
out_4 = tf.nn.convolution(inp_imag, ker_real, padding='VALID')
out = tf.complex(out_1 - out_2,
out_3 + out_4)
# No padding required in this case
tf_conv_mode = 'VALID'
elif padding == 'same':
ker = tf.pad(ker, [[0,1],[0,0],[0,0]])
tf_conv_mode = 'SAME'
elif padding == 'full':
ker_len = ker.shape[0] #tf.shape(ker)[0]
if (ker_len % 2) == 0:
extra_padding_left = ker_len // 2
extra_padding_right = extra_padding_left-1
else:
out = tf.nn.convolution(inp, ker, padding='VALID')
extra_padding_left = (ker_len-1) // 2
extra_padding_right = extra_padding_left
inp = tf.pad(inp, [[0,0],
[extra_padding_left,extra_padding_right],
[0,0]])
tf_conv_mode = 'SAME'

# Extract the real and imaginary components of the input and kernel
inp_real = tf.math.real(inp)
ker_real = tf.math.real(ker)
inp_imag = tf.math.imag(inp)
ker_imag = tf.math.imag(ker)

# Compute convolution
# The output is complex-valued if the input or the kernel is.
# Defaults to False, and set to True if required later
complex_output = False
out_1 = tf.nn.convolution(inp_real, ker_real, padding=tf_conv_mode)
if inp_dtype.is_complex:
out_4 = tf.nn.convolution(inp_imag, ker_real, padding=tf_conv_mode)
complex_output = True
else:
if padding == 'same':
ker = tf.pad(ker, [[0,1],[0,0],[0,0]])
elif padding == 'full':
ker_len = tf.shape(ker)[0]
if tf.equal(tf.math.floormod(ker_len,2), 0):
extra_padding_left = ker_len // 2
extra_padding_right = extra_padding_left-1
else:
extra_padding_left = (ker_len-1) // 2
extra_padding_right = extra_padding_left
inp = tf.pad(inp, [[0,0],
[extra_padding_left,extra_padding_right],
[0,0]])
if complex_output:
inp_real = tf.math.real(inp)
inp_imag = tf.math.imag(inp)
ker_real = tf.math.real(ker)
ker_imag = tf.math.imag(ker)
out_1 = tf.nn.convolution(inp_real, ker_real, padding='SAME')
out_2 = tf.nn.convolution(inp_imag, ker_imag, padding='SAME')
out_3 = tf.nn.convolution(inp_real, ker_imag, padding='SAME')
out_4 = tf.nn.convolution(inp_imag, ker_real, padding='SAME')
out = tf.complex(out_1 - out_2,
out_3 + out_4)
else:
out = tf.nn.convolution(inp, ker, padding='SAME')
out_4 = tf.zeros_like(out_1)
if ker_dtype.is_complex:
out_3 = tf.nn.convolution(inp_real, ker_imag, padding=tf_conv_mode)
complex_output = True
else:
out_3 = tf.zeros_like(out_1)
if inp_dtype.is_complex and ker.dtype.is_complex:
out_2 = tf.nn.convolution(inp_imag, ker_imag, padding=tf_conv_mode)
else:
out_2 = tf.zeros_like(out_1)
if complex_output:
out = tf.complex(out_1 - out_2,
out_3 + out_4)
else:
out = out_1

# Reshape the output to the expected shape
out = tf.squeeze(out, axis=-1)
out_len = tf.shape(out)[-1]
out = tf.reshape(out, tf.concat([batch_shape, [out_len]], axis=-1))
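For reference, a hedged sketch of the behavior the refactor preserves: mixed real/complex operands are handled by splitting the complex convolution into real convolutions and skipping the cross terms that are identically zero. The import path `sionna.signal.convolve` and the tolerance are assumptions.

```python
# Hedged sketch: checks the 'full' convolution of a complex input with a
# real kernel against a numpy reference.
import numpy as np
import tensorflow as tf
from sionna.signal import convolve

inp = tf.complex(tf.random.normal([2, 16]), tf.random.normal([2, 16]))
ker = tf.random.normal([5])                 # real-valued kernel

out = convolve(inp, ker, padding='full')    # complex output of length 16 + 5 - 1
ref = np.stack([np.convolve(x, ker.numpy(), mode='full') for x in inp.numpy()])
assert np.allclose(out.numpy(), ref, atol=1e-4)
```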
4 changes: 2 additions & 2 deletions sionna/utils/metrics.py
@@ -46,7 +46,7 @@ def result(self):
return tf.cast(tf.math.divide_no_nan(self.bmi, self.counter),
dtype=tf.float32)

def reset_states(self):
def reset_state(self):
self.bmi.assign(0.0)
self.counter.assign(0.0)

@@ -91,7 +91,7 @@ def result(self):
return tf.cast(tf.math.divide_no_nan(self.ber, self.counter),
dtype=tf.float32)

def reset_states(self):
def reset_state(self):
self.ber.assign(0.0)
self.counter.assign(0.0)

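The rename tracks the `tf.keras.metrics.Metric` API, where `reset_states()` was superseded by `reset_state()`. A hedged usage sketch follows; the class name `BitErrorRate` is inferred from the `self.ber` attribute in the hunk above and should be treated as an assumption.

```python
# Hedged sketch: class and method names other than reset_state() are
# assumptions inferred from the hunk above.
import tensorflow as tf
from sionna.utils.metrics import BitErrorRate

ber = BitErrorRate()
b     = tf.constant([[0., 1., 1., 0.]])
b_hat = tf.constant([[0., 1., 0., 0.]])
ber.update_state(b, b_hat)
print(float(ber.result()))   # 0.25 for one flipped bit out of four
ber.reset_state()            # clears the accumulated counters
```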
1 change: 1 addition & 0 deletions test/test_ofdm.py
@@ -18,6 +18,7 @@
import unittest
import numpy as np
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
print('Number of GPUs available :', len(gpus))
if gpus:
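For completeness, a hedged sketch of the kind of GPU configuration that typically follows such a check; the hunk is truncated, so what the test file actually does after `if gpus:` is not shown here.

```python
# Hedged sketch only; not taken from test_ofdm.py. A common pattern after
# detecting GPUs is to enable memory growth so TensorFlow does not reserve
# all device memory up front.
import tensorflow as tf

gpus = tf.config.list_physical_devices('GPU')
print('Number of GPUs available :', len(gpus))
if gpus:
    try:
        tf.config.experimental.set_memory_growth(gpus[0], True)
    except RuntimeError as e:
        # Memory growth must be set before GPUs have been initialized
        print(e)
```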
