System Tests
Adaptive Modulation and Coding Tests
The test suite lte-link-adaptation provides system tests recreating a
scenario with a single eNB and a single UE. Different test cases are created
corresponding to different SNR values perceived by the UE. The aim of the test
is to check that in each test case the chosen MCS corresponds to some known
reference values. These reference values are obtained by
re-implementing in Octave (see src/lte/test/reference/lte_amc.m) the
model described in Section Adaptive Modulation and Coding for the calculation of the
spectral efficiency, and determining the corresponding MCS index
by manually looking up the tables in [R1-081483]. The resulting test vector is
represented in Figure Test vector for Adaptive Modulation and Coding.
The MCS used by the simulator is measured by inspecting the tracing output produced by the scheduler after 4 ms (this is needed to account for the initial delay in CQI reporting). The SINR calculated by the simulator is also obtained, using the LteSinrChunkProcessor interface. The test passes if both of the following conditions are satisfied:
- the SINR calculated by the simulator corresponds to the SNR of the test vector within a given absolute tolerance;
- the MCS index used by the simulator exactly corresponds to the one in the test vector.
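As a minimal sketch of how these two pass conditions can be expressed with the standard ns-3 test macros (illustrative only, not the actual lte-link-adaptation code; the class name, member names and tolerance value are assumptions):

  #include "ns3/test.h"
  #include <cstdint>

  using namespace ns3;

  // Illustrative TestCase checking one (SINR, MCS) sample against the reference.
  class AmcSampleTestCase : public TestCase
  {
  public:
    AmcSampleTestCase (double simSinrDb, double refSnrDb, uint16_t simMcs, uint16_t refMcs)
      : TestCase ("AMC sample check"),
        m_simSinrDb (simSinrDb),
        m_refSnrDb (refSnrDb),
        m_simMcs (simMcs),
        m_refMcs (refMcs)
    {}

  private:
    void DoRun () override
    {
      const double toleranceDb = 1e-3; // placeholder absolute tolerance on the SINR
      NS_TEST_ASSERT_MSG_EQ_TOL (m_simSinrDb, m_refSnrDb, toleranceDb,
                                 "SINR does not match the test vector");
      NS_TEST_ASSERT_MSG_EQ (m_simMcs, m_refMcs,
                             "MCS index does not match the test vector");
    }

    double m_simSinrDb;
    double m_refSnrDb;
    uint16_t m_simMcs;
    uint16_t m_refMcs;
  };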
UE Measurements Tests
The test suite lte-ue-measurements provides system tests recreating an inter-cell interference scenario identical to the one defined for the lte-interference test suite. However, in this test the quantities to be tested are the RSRP and RSRQ measurements performed by the UE at two different points of the stack: at the source, which is the UE PHY layer, and at the destination, which is the eNB RRC.
The test vectors are obtained by means of a dedicated Octave script (available in src/lte/test/reference/lte-ue-measurements.m), which performs the link budget calculations (including interference) corresponding to the topology of each test case, and outputs the resulting RSRP and RSRQ. The obtained values are used for checking the correctness of the UE measurements at the PHY layer, while they have to be converted according to the 3GPP formatting for checking their correctness at the eNB RRC level.
Proportional Fair scheduler performance
The test suite lte-pf-ff-mac-scheduler creates different test cases with
a single eNB, using the Proportional Fair (PF) scheduler, and several UEs, all
having the same Radio Bearer specification. The test cases are grouped in two
categories in order to evaluate the performance both in terms of the adaptation
to the channel condition and from a fairness perspective.
In the first category of test cases, the UEs are all placed at the same distance from the eNB, and hence they all have the same SINR. Different test cases are implemented by using a different SINR value and a different number of UEs. The test consists of checking that the obtained throughput performance matches the
known reference throughput up to a given tolerance. The expected
behavior of the PF scheduler when all UEs have the same SNR is that
each UE should get an equal fraction of the throughput obtainable by a
single UE when using all the resources. We calculate the reference
throughput value by dividing the throughput achievable by a single UE
at the given SNR by the total number of UEs.
Let \tau be the TTI duration, B the transmission bandwidth configuration in number of RBs, M the modulation and coding scheme in use at the given SINR, and S(M, B) the transport block size as defined in [TS36213]. The reference throughput T in bit/s achieved by each UE is calculated as

  T = \frac{S(M, B)}{\tau N}
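For instance, a minimal sketch of this computation, with a purely illustrative transport block size (the actual S(M, B) values come from the TBS tables of [TS36213]):

  #include <iostream>

  int main ()
  {
    const double tbSizeBits = 4392.0;  // S(M, B), hypothetical value in bits
    const double ttiSeconds = 0.001;   // tau, the TTI duration (1 ms)
    const int nUes = 3;                // N, the number of UEs sharing the cell

    // Reference per-UE throughput T = S(M, B) / (tau * N), in bit/s.
    const double refThroughputBps = tbSizeBits / (ttiSeconds * nUes);
    std::cout << "reference throughput = " << refThroughputBps << " bit/s" << std::endl;
    return 0;
  }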
The curves labeled “PF” in Figure fig-lenaThrTestCase1 represent the test values
calculated for the PF scheduler tests of the first category, that we just described.
The second category of tests aims at verifying the fairness of the PF scheduler in a more realistic simulation scenario where the UEs have different SINRs (constant for the whole simulation). In these conditions, the PF scheduler will give each user a share of the system bandwidth that is proportional to the capacity achievable by that user alone, given its SINR. In detail, let M_i be the modulation and coding scheme used by each UE i (which is a deterministic function of the SINR of the UE, and is hence known in this scenario). Based on the MCS, we determine the achievable rate R_i of each user i using the procedure described in Section Proportional Fair (PF) Scheduler. We then define the achievable rate ratio \rho_{R,i} of each user i as

  \rho_{R,i} = \frac{R_i}{\sum_{j=1}^{N} R_j}

Let now T_i be the throughput actually achieved by UE i, which is obtained as part of the simulation output. We define the obtained throughput ratio \rho_{T,i} of UE i as

  \rho_{T,i} = \frac{T_i}{\sum_{j=1}^{N} T_j}

The test consists of checking that the following condition is verified:

  \rho_{T,i} = \rho_{R,i}

if so, it means that the throughput obtained by each UE over the whole simulation matches the steady-state throughput expected by the PF scheduler according to the theory. This test can be derived from [Holtzman2000] as follows. From Section 3 of [Holtzman2000], we know that

  T_i = c R_i

where c is a constant. By substituting the above into the definition of \rho_{T,i} given previously, we get

  \rho_{T,i} = \frac{T_i}{\sum_{j=1}^{N} T_j} = \frac{c R_i}{\sum_{j=1}^{N} c R_j} = \frac{R_i}{\sum_{j=1}^{N} R_j} = \rho_{R,i}

which is exactly the expression being used in the test.
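A minimal sketch of how this fairness check can be coded (the rates, throughputs and tolerance below are illustrative values, not the ones of the actual test):

  #include <cassert>
  #include <cmath>
  #include <numeric>
  #include <vector>

  int main ()
  {
    // Hypothetical per-UE achievable rates R_i and measured throughputs T_i.
    std::vector<double> achievableRate = {10e6, 5e6, 2e6};
    std::vector<double> measuredThroughput = {5.9e6, 2.9e6, 1.2e6};

    const double sumR = std::accumulate (achievableRate.begin (), achievableRate.end (), 0.0);
    const double sumT = std::accumulate (measuredThroughput.begin (), measuredThroughput.end (), 0.0);
    const double tolerance = 0.1; // assumed tolerance on the ratios

    for (std::size_t i = 0; i < achievableRate.size (); ++i)
      {
        double rhoR = achievableRate[i] / sumR;       // achievable rate ratio
        double rhoT = measuredThroughput[i] / sumT;   // obtained throughput ratio
        assert (std::fabs (rhoT - rhoR) < tolerance); // test condition rho_T,i = rho_R,i
      }
    return 0;
  }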
Figure Throughput ratio evaluation for the PF scheduler presents the results obtained in a test case with UEs that are located at different distances from the base station, so that each of them uses a different MCS index. From the figure, we note that, as expected, the obtained throughput is proportional to the achievable rate. In other words, the PF scheduler assigns more resources to the users that use a higher MCS index.
Blind Average Throughput scheduler performance
Test suites lte-tdbet-ff-mac-scheduler and lte-fdbet-ff-mac-scheduler create different
test cases with a single eNB and several UEs, all having the same Radio Bearer specification.
In the first test case of lte-tdbet-ff-mac-scheduler and lte-fdbet-ff-mac-scheduler, the UEs are all placed at the same distance from the eNB, and hence they all have the same SNR. Different test cases are implemented by using a different SNR value and a different number of UEs. The test consists of checking that the obtained throughput matches the known reference throughput up to a given tolerance. In the long term, the expected behavior of both TD-BET and FD-BET when all UEs have the same SNR is that each UE should get an equal throughput. However, the exact throughput values of TD-BET and FD-BET in this test case are not the same.
When all UEs have the same SNR, TD-BET can be seen as a special case of PF where the achievable rate equals 1. Therefore, the throughput obtained by TD-BET is equal to that of PF. On the other hand, FD-BET performs very similarly to the round robin (RR) scheduler when all UEs have the same SNR and the number of UEs (or RBGs) is an integer multiple of the number of RBGs (or UEs). In this case, FD-BET always allocates the same number of RBGs to each UE. For example, if the eNB has 12 RBGs and there are 6 UEs, then each UE will get 2 RBGs in each TTI; if the eNB has 12 RBGs and there are 24 UEs, then each UE will get 1 RBG every two TTIs. When the number of UEs (RBGs) is not an integer multiple of the number of RBGs (UEs), FD-BET no longer follows the RR behavior, because it will assign a different number of RBGs to some UEs, while the throughput of each UE is still the same.
The second category of tests aims at verifying the fairness of both the TD-BET and FD-BET schedulers in a more realistic simulation scenario where the UEs have different SNRs (constant for the whole simulation). In this case, both schedulers should give the same amount of average throughput to each user.
Specifically, for TD-BET, let F_i be the fraction of time allocated to user i over the total simulation time, R^{fb}_i the full-bandwidth achievable rate for user i, and T_i the achieved throughput of user i. Then we have:

  T_i = F_i R^{fb}_i

In TD-BET, the sum of F_i over all users equals one. In the long term, all UEs have the same T_i, so we can replace T_i by T. Then we have:

  T = \frac{1}{\frac{1}{R^{fb}_1} + \frac{1}{R^{fb}_2} + \dots + \frac{1}{R^{fb}_N}}
Token Bank Fair Queue scheduler performance
Test suites lte-fdtbfq-ff-mac-scheduler and lte-tdtbfq-ff-mac-scheduler create different test cases for testing three key features of the TBFQ scheduler: traffic policing, fairness and traffic balance. Constant Bit Rate UDP traffic is used in both downlink and uplink in all test cases. The packet interval is set to 1 ms to keep the RLC buffer non-empty. Different traffic rates are achieved by setting different packet sizes. Specifically, two classes of flows are created in the test suites:
- Homogeneous flows: flows with the same token generation rate and packet arrival rate
- Heterogeneous flows: flows with different packet arrival rates, but with the same token generation rate
Test case 1 verifies the traffic policing and fairness features for the scenario in which all UEs are placed at the same distance from the eNB. In this case, all UEs have the same SNR value. Different test cases are implemented by using a different SNR value and a different number of UEs. Because each flow has the same traffic rate and token generation rate, the TBFQ scheduler will guarantee the same throughput among UEs without being constrained by the token generation rate. In addition, the exact value of the UE throughput depends on the total traffic rate:
- If total traffic rate <= maximum throughput, UE throughput = traffic rate
- If total traffic rate > maximum throughput, UE throughput = maximum throughput / N
Here, N is the number of UEs connected to the eNodeB. The maximum throughput in this case equals the rate obtained when all RBGs are assigned to one UE (e.g., when the distance equals 0, the maximum throughput is 2196000 byte/sec). When the traffic rate is smaller than the maximum throughput, TBFQ can police the traffic through the token generation rate, so that the UE throughput equals its actual traffic rate (the token generation rate is set to the traffic generation rate). On the other hand, when the total traffic rate is bigger than the maximum throughput, the eNodeB cannot forward all traffic to the UEs. Therefore, in each TTI, TBFQ will allocate all RBGs to one UE due to the large amount of data buffered in the RLC buffer. When a UE is scheduled in the current TTI, its token counter is decreased, so that it will not be scheduled in the next TTI. Because each UE has the same traffic generation rate, TBFQ will serve each UE in turn and only serve one UE in each TTI (both in TD TBFQ and FD TBFQ). Therefore, the UE throughput in the second condition equals an even share of the maximum throughput.
Test case 2 verifies the traffic policing and fairness features for the scenario in which each UE is placed at a different distance from the eNB. In this case, each UE has a different SNR value. Similarly to test case 1, the UE throughput in test case 2 also depends on the total traffic rate, but with a different maximum throughput. Suppose all UEs have a high traffic load. Then the traffic will saturate the RLC buffer in the eNodeB. In each TTI, after selecting the UE with the highest metric, TBFQ will allocate all RBGs to this UE due to the large RLC buffer size. On the other hand, once the RLC buffer is saturated, the total throughput of all UEs cannot increase any more. In addition, as we discussed in test case 1, for homogeneous flows, which have the same t_i and r_i, each UE will achieve the same throughput in the long term. Therefore, we can use the same method as in TD-BET to calculate the maximum throughput:

  T_{max} = \frac{N}{\sum_{i=1}^{N} \frac{1}{R^{fb}_i}}

Here, T_{max} is the maximum throughput, R^{fb}_i is the full-bandwidth achievable rate for user i, and N is the number of UEs. When the total traffic rate is bigger than T_{max}, the UE throughput equals T_{max}/N. Otherwise, the UE throughput equals its traffic generation rate.
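A minimal sketch of this calculation (the rates and traffic values are illustrative only):

  #include <algorithm>
  #include <iostream>
  #include <vector>

  int main ()
  {
    // Hypothetical full-bandwidth achievable rates R^fb_i, in bit/s.
    std::vector<double> fullBandwidthRate = {20e6, 10e6, 5e6};
    const double n = static_cast<double> (fullBandwidthRate.size ());

    // T_max = N / sum_i (1 / R^fb_i): the saturated aggregate throughput.
    double invSum = 0.0;
    for (double r : fullBandwidthRate)
      {
        invSum += 1.0 / r;
      }
    const double tMax = n / invSum;

    // Per-UE throughput: the per-UE traffic rate while the total offered
    // traffic stays below T_max, otherwise an even share T_max / N.
    const double trafficRate = 3e6; // hypothetical per-UE CBR rate, bit/s
    const double ueThroughput = std::min (trafficRate, tMax / n);
    std::cout << "T_max = " << tMax << " bit/s, per-UE throughput = "
              << ueThroughput << " bit/s" << std::endl;
    return 0;
  }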
In test case 3, three flows with different traffic rates are created. The token generation rate of each flow is the same and equals the average traffic rate of the three flows. Because TBFQ uses a shared token bank, tokens contributed by a UE with a lower traffic load can be utilized by a UE with a higher traffic load. In this way, TBFQ can guarantee the traffic rate of each flow. Although we use heterogeneous flows here, the calculation of the maximum throughput is the same as in test case 2. In calculating the maximum throughput of test case 2, we assumed that all UEs have a high traffic load, so that the scheduler always assigns all RBGs to one UE in each TTI. This assumption also holds in the heterogeneous flow case. In other words, whether or not those flows have the same traffic rate and token generation rate, if their traffic rates are large enough, TBFQ behaves the same as in test case 2. Therefore, the maximum bandwidth in test case 3 is the same as in test case 2.
In test case 3, for some flows the token generation rate does not equal the MBR, although all flows are CBR traffic. This does not comply with our parameter setting rules. Actually, the traffic balance feature is intended for VBR traffic: because different UEs' peak rates may occur at different times, TBFQ uses the shared token bank to balance the traffic among those VBR flows. Test case 3 uses CBR traffic to verify this feature, but in a real simulation it is recommended to set the token generation rate to the MBR.
Building Propagation Loss Model
The aim of this system test is to verify the integration of the BuildingPathlossModel with the lte module. The test exploits a set of three pre-calculated losses for generating the expected SINR at the receiver, taking into account the transmission and noise powers. These SINR values are compared with the results obtained from an LTE simulation that uses the BuildingPathlossModel. The reference loss values are calculated off-line with an Octave script (/test/reference/lte_pathloss.m). Each test case passes if the reference loss value is equal to the value calculated by the simulator within a given tolerance (in dB), which accounts for numerical errors in the calculations.
Physical Error Model
The test suite lte-phy-error-model generates different test cases for evaluating both the data and control error models. As regards the data, the test consists of nine test cases with a single eNB and a varying number of UEs, all having the same Radio Bearer specification. Each test is designed to evaluate the error rate perceived by a specific TB size, in order to verify that it corresponds to the expected values according to the BLER generated for a CB size analogous to the TB size. This means that, for instance, the test will check that the performance of a TB of a given size is analogous to that of a CB of similar size, by collecting the performance of a user which has been forced to generate such a TB size by means of its distance to the eNB. In order to significantly test the BLER at MAC level, we configured the Adaptive Modulation and Coding (AMC) module, the LteAmc class, to be less robust to channel conditions by using the PiroEW2010 AMC model and configuring it to select the MCS considering a target BER of 0.03 (instead of the default value of 0.00005). We note that these values do not reflect the actual BER, since they come from an analytical bound which does not consider all the aspects of the transmission chain; therefore the BER and BLER actually experienced at the reception of a TB are in general different.
The parameters of the nine test cases are reported in the following:
- 4 UEs placed 1800 meters away from the eNB, which implies the use of MCS 2 (SINR of -5.51 dB) and a TB of 256 bits, which in turn produces a BLER of 0.33 (see point A in figure BLER for tests 1, 2, 3.).
- 2 UEs placed 1800 meters away from the eNB, which implies the use of MCS 2 (SINR of -5.51 dB) and a TB of 528 bits, which in turn produces a BLER of 0.11 (see point B in figure BLER for tests 1, 2, 3.).
- 1 UE placed 1800 meters away from the eNB, which implies the use of MCS 2 (SINR of -5.51 dB) and a TB of 1088 bits, which in turn produces a BLER of 0.02 (see point C in figure BLER for tests 1, 2, 3.).
- 1 UE placed 600 meters away from the eNB, which implies the use of MCS 12 (SINR of 4.43 dB) and a TB of 4800 bits, which in turn produces a BLER of 0.3 (see point D in figure BLER for tests 4, 5.).
- 3 UEs placed 600 meters away from the eNB, which implies the use of MCS 12 (SINR of 4.43 dB) and a TB of 1632 bits, which in turn produces a BLER of 0.55 (see point E in figure BLER for tests 4, 5.).
- 1 UE placed 470 meters away from the eNB, which implies the use of MCS 16 (SINR of 8.48 dB) and a TB of 7272 bits (segmented into 2 CBs of 3648 and 3584 bits), which in turn produces a BLER of 0.14, since each CB has a CBLER equal to 0.075 (see point F in figure BLER for test 6.).
The test verifies that in each case the number of correctly received packets corresponds to a Bernoulli distribution with a confidence interval of 99%, where the probability of success in each trial is 1 - BLER and n is the total number of packets sent.
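A sketch of how such a check can be performed, using the normal approximation of the binomial distribution (the 2.575 factor is the two-sided 99% quantile of the standard normal distribution; the n and BLER values are illustrative):

  #include <cmath>
  #include <iostream>

  int main ()
  {
    const double n = 1000.0;  // total number of packets sent (illustrative)
    const double bler = 0.33; // expected BLER (illustrative)
    const double p = 1.0 - bler;

    // Mean and standard deviation of the number of correctly received packets.
    const double mean = n * p;
    const double stdDev = std::sqrt (n * p * (1.0 - p));

    // Two-sided 99% confidence interval.
    const double z = 2.575;
    std::cout << "expected correctly received packets in ["
              << mean - z * stdDev << ", " << mean + z * stdDev << "]" << std::endl;
    return 0;
  }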
The error model of the PCFICH-PDCCH channels consists of 4 test cases with a single UE and several eNBs, where the UE is connected to only one eNB so that the remaining ones act as interferers. The errors on data are disabled in order to verify only those due to erroneous decoding of the PCFICH-PDCCH. The test verifies that the error on the received data respects the decoding error probability of the PCFICH-PDCCH with a tolerance of 0.1, due to the errors that might be produced in quantizing the MI and the error curve. As before, the system has been forced to work in a less conservative fashion in the AMC module in order to appreciate the results in borderline situations. The parameters of the 4 test cases are reported in the following:
- 2 eNBs placed 1078 meters away from the UE, which implies a SINR of -2.00 dB and a TB of 217 bits, which in turn produces a BLER of 0.007.
- 3 eNBs placed 1078 meters away from the UE, which implies a SINR of -4.00 dB and a TB of 217 bits, which in turn produces a BLER of 0.045.
- 4 eNBs placed 1078 meters away from the UE, which implies a SINR of -6.00 dB and a TB of 133 bits, which in turn produces a BLER of 0.206.
- 5 eNBs placed 1078 meters away from the UE, which implies a SINR of -7.00 dB and a TB of 81 bits, which in turn produces a BLER of 0.343.
HARQ Model
The test suite lte-harq includes two tests for evaluating the HARQ model and the related extension of the error model. The test consists of checking whether the amount of bytes received during the simulation corresponds to the expected one according to the transport block sizes and the HARQ dynamics. In detail, the test checks whether the throughput obtained after one HARQ retransmission is the expected one. For evaluating the expected throughput, the expected TB delivery time has been evaluated according to the following formula:

  T = P_s^1 \times 1 + P_s^2 \times 2 + P_s^3 \times 3

where P_s^i is the probability of successfully receiving the HARQ block at attempt i (i.e., the RV, with 3GPP naming). According to the scenarios, in the test we always have P_s^1 equal to 0.0, while P_s^2 varies between the two tests.
The expected throughput is calculated by counting the number of transmission slots available during the simulation (i.e., the number of TTIs) and the size S of the TB used in the simulation, in detail:

  Thr = \frac{TTI_{NUM}}{T} S

where TTI_{NUM} is the total number of TTIs in 1 second.
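A minimal sketch of this computation, with purely illustrative probabilities and TB size (not the values used in the actual tests):

  #include <iostream>

  int main ()
  {
    // Illustrative per-attempt success probabilities P_s^1, P_s^2, P_s^3.
    const double ps[3] = {0.0, 0.9, 0.1};

    // Expected TB delivery time, in TTIs: T = sum_i (i * P_s^i).
    double expectedDeliveryTtis = 0.0;
    for (int i = 0; i < 3; ++i)
      {
        expectedDeliveryTtis += (i + 1) * ps[i];
      }

    // Expected throughput: (TTIs per second / T) * TB size in bits.
    const double ttiNum = 1000.0;     // number of TTIs in one second
    const double tbSizeBits = 1000.0; // hypothetical TB size
    const double expectedThroughputBps = (ttiNum / expectedDeliveryTtis) * tbSizeBits;
    std::cout << "expected throughput = " << expectedThroughputBps << " bit/s" << std::endl;
    return 0;
  }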
The test is performed with the Round Robin scheduler. The test passes if the measured throughput matches the reference throughput within a relative tolerance of 0.1. This tolerance is needed to account for the transient behavior at the beginning of the simulation and for the blocks still in flight at the end of the simulation.
MIMO Model
The test suite lte-mimo aims at verifying both the effect of the gain considered for each Transmission Mode on the system performance and the Transmission Mode switching through the scheduler interface. The test consists of checking whether the amount of bytes received during a certain time window (0.1 seconds in our case) corresponds to the expected one according to the transport block size values reported in Table 7.1.7.2.1-1 of [TS36213], similarly to what is done for the scheduler tests.
The test is performed both for the Round Robin and the Proportional Fair schedulers. The test passes if the measured throughput matches the reference throughput within a relative tolerance of 0.1. This tolerance is needed to account for the transient behavior at the beginning of the simulation and the transition phase between the Transmission Modes.
Antenna Model integration
The test suite lte-antenna checks that the AntennaModel integrated
with the LTE model works correctly. This test suite recreates a
simulation scenario with one eNB node at coordinates (0,0,0) and one
UE node at coordinates (x,y,0). The eNB node is configured with a CosineAntennaModel having a given orientation and beamwidth. The UE instead uses the default IsotropicAntennaModel. The test checks that the received power both in uplink and downlink accounts for the correct value of the antenna gain, which is determined offline; this is implemented by comparing the uplink and downlink SINR and checking that both match the reference value up to a given tolerance which accounts for numerical errors.
Different test cases are provided by varying the x and y coordinates
of the UE, and the beamwidth and the orientation of the antenna of
the eNB.
RLC
Two test suites lte-rlc-um-transmitter and
lte-rlc-am-transmitter check that the UM RLC and the AM RLC
implementation work correctly. Both these suites work by testing RLC
instances connected to special test entities that play the role of the
MAC and of the PDCP, implementing respectively the LteMacSapProvider
and LteRlcSapUser interfaces. Different test cases (i.e., input test vectors consisting of a series of primitive calls by the MAC and the PDCP) are provided that check the behavior in the following cases:
- one SDU, one PDU: the MAC notifies a TX opportunity that causes the creation of a PDU which exactly contains a whole SDU;
- segmentation: the MAC notifies multiple TX opportunities that are smaller than the SDU size stored in the transmission buffer, which then has to be fragmented, and hence multiple PDUs are generated;
- concatenation: the MAC notifies a TX opportunity that is bigger than the SDU, hence multiple SDUs are concatenated into the same PDU;
- buffer status report: a series of new SDU notifications by the PDCP is interleaved with a series of TX opportunity notifications in order to verify that the buffer status report procedure is correct.
In all these cases, an output test vector is determined manually from knowledge of the input test vector and of the expected behavior. These test vectors are specialized for UM RLC and AM RLC due to their different behavior. Each test case passes if the sequence of primitives triggered by the RLC instance being tested is exactly equal to the output test vector. In particular, for each PDU transmitted by the RLC instance, both the size and the content of the PDU are verified to check for an exact match with the test vector.
RRC
The test suite lte-rrc tests the correct functionality of the following aspects:
- MAC Random Access
- RRC System Information Acquisition
- RRC Connection Establishment
- RRC Reconfiguration
The test suite considers a type of scenario with a single eNB and multiple UEs that are instructed to connect to the eNB. Each test case implements an instance of this scenario with specific values of the following parameters:
- number of UEs
- number of Data Radio Bearers to be activated for each UE
- the time t_0 at which the first UE is instructed to start connecting to the eNB
- the time interval d_c between the start of connection of UE n and UE n+1; the time at which UE n starts connecting is thus determined as t_0 + (n-1) d_c
- a boolean flag indicating whether the ideal or the real RRC protocol model is used
Each test case passes if a number of test conditions are positively evaluated for each UE after a delay d from the time it started connecting to the eNB. The delay d is determined as

  d = d_{si} + d_{ra} + d_{ce} + d_{cr}

where:
- d_{si} is the maximum delay necessary for the acquisition of System Information. We set it to 90ms, accounting for 10ms for the MIB acquisition and 80ms for the subsequent SIB2 acquisition;
- d_{ra} is the delay of the MAC Random Access (RA) procedure. This depends on preamble collisions as well as on the availability of resources for the UL grant allocation. The total number of necessary RA attempts depends on preamble collisions and on failures to allocate the UL grant because of lack of resources. The number of collisions depends on the number of UEs that try to access simultaneously; we estimated that, for a sufficiently high RA success probability, 5 attempts are sufficient for up to 20 UEs, and 10 attempts for up to 50 UEs. For the UL grant, considering the system bandwidth and the default MCS used for the UL grant (MCS 0), at most 4 UL grants can be assigned in a TTI; so for n UEs trying to do RA simultaneously, the maximum number of attempts due to the UL grant issue is \lceil n/4 \rceil. The time for a RA attempt is determined by 3ms plus the value of LteEnbMac::RaResponseWindowSize, which defaults to 3ms, plus 1ms for the scheduling of the new transmission;
- d_{ce} is the delay required for the transmission of RRC CONNECTION SETUP + RRC CONNECTION SETUP COMPLETED. We consider a round trip delay of 10ms plus \lceil 2n/4 \rceil ms, considering that 2 RRC packets have to be transmitted per UE and that at most 4 such packets can be transmitted per TTI;
- d_{cr} is the delay required for the RRC CONNECTION RECONFIGURATION transactions that might be needed. The number of transactions needed is 1 for each bearer activation. Similarly to what is done for d_{ce}, for each transaction we consider a round trip delay of 10ms plus \lceil 2n/4 \rceil ms, plus an additional delay of 20ms.
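A sketch of how this delay bound might be computed for n UEs and one bearer per UE, following the reasoning above; the way the two RA attempt counts are combined (here they are simply summed) and the helper function itself are assumptions made for illustration, not the actual test code:

  #include <cmath>
  #include <iostream>

  // Illustrative computation of d = d_si + d_ra + d_ce + d_cr, in ms.
  double
  ConnectionSetupDelayMs (unsigned int n, unsigned int nBearers)
  {
    const double dSi = 90.0; // MIB (10 ms) + SIB2 (80 ms) acquisition

    // Random Access: 5 attempts up to 20 UEs, 10 attempts up to 50 UEs,
    // plus at most ceil(n/4) attempts due to the UL grant bottleneck;
    // each attempt takes 3 ms + RaResponseWindowSize (3 ms) + 1 ms scheduling.
    const double attempts = (n <= 20 ? 5.0 : 10.0) + std::ceil (n / 4.0);
    const double dRa = attempts * (3.0 + 3.0 + 1.0);

    // RRC CONNECTION SETUP (+ COMPLETED): 10 ms round trip + ceil(2n/4) ms.
    const double dCe = 10.0 + std::ceil (2.0 * n / 4.0);

    // One RRC CONNECTION RECONFIGURATION transaction per bearer activation.
    const double dCr = nBearers * (10.0 + std::ceil (2.0 * n / 4.0));

    return dSi + dRa + dCe + dCr;
  }

  int main ()
  {
    std::cout << "d = " << ConnectionSetupDelayMs (4, 1) << " ms" << std::endl;
    return 0;
  }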
The conditions that are evaluated for a test case to pass are, for
each UE:
- the eNB has the context of the UE (identified by the RNTI value
retrieved from the UE RRC)
- the RRC state of the UE at the eNB is CONNECTED_NORMALLY
- the RRC state at the UE is CONNECTED_NORMALLY
- the UE is configured with the CellId, DlBandwidth, UlBandwidth,
DlEarfcn and UlEarfcn of the eNB
- the IMSI of the UE stored at the eNB is correct
- the number of active Data Radio Bearers is the expected one, both
at the eNB and at the UE
- for each Data Radio Bearer, the following identifiers match between
the UE and the eNB: EPS bearer id, DRB id, LCID
GTP-U protocol
The unit test suite epc-gtpu checks that the encoding and decoding of the GTP-U
header is done correctly. The test fills in a header with a set of
known values, adds the header to a packet, and then removes the header
from the packet. The test fails if, upon removal, any of the fields in the GTP-U header is not decoded correctly. This is detected by comparing each decoded value with the corresponding known value.
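The pattern under test is the usual ns-3 header serialization round trip; a minimal sketch (the GtpuHeader accessors shown, such as SetTeid/GetTeid, are assumed and may differ in detail from the actual class):

  #include "ns3/epc-gtpu-header.h"
  #include "ns3/packet.h"
  #include <cassert>

  using namespace ns3;

  int main ()
  {
    // Fill in a GTP-U header with a known value and add it to a packet...
    GtpuHeader txHeader;
    txHeader.SetTeid (0xdeadbeef);

    Ptr<Packet> packet = Create<Packet> (100);
    packet->AddHeader (txHeader);

    // ...then remove it and check that the field is decoded correctly.
    GtpuHeader rxHeader;
    packet->RemoveHeader (rxHeader);
    assert (rxHeader.GetTeid () == txHeader.GetTeid ());
    return 0;
  }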
S1-U interface
Two test suites (epc-s1u-uplink and epc-s1u-downlink) make
sure that the S1-U interface implementation works correctly in
isolation. This is achieved by creating a set of simulation scenarios
where the EPC model alone is used, without the LTE model (i.e.,
without the LTE radio protocol stack, which is replaced by simple CSMA
devices). This checks that the
interoperation between multiple EpcEnbApplication instances in
multiple eNBs and the EpcSgwPgwApplication instance in the SGW/PGW
node works correctly in a variety of scenarios, with varying numbers
of end users (nodes with a CSMA device installed), eNBs, and different
traffic patterns (packet sizes and number of total packets).
Each test case works by injecting the chosen traffic pattern into the network (at the considered UE or at the remote host, for the uplink or the downlink test suite respectively) and checking that exactly the same traffic pattern is received at the receiver (the remote host or each considered UE, respectively). If any mismatch between the transmitted and received traffic pattern is detected for any UE, the test fails.
TFT classifier
The test suite epc-tft-classifier checks in isolation that the
behavior of the EpcTftClassifier class is correct. This is performed
by creating different classifier instances where different TFT
instances are activated, and testing for each classifier that a
heterogeneous set of packets (including IP and TCP/UDP headers) is
classified correctly. Several test cases are provided that check the
different matching aspects of a TFT (e.g. local/remote IP address, local/remote port) both for uplink and
downlink traffic. Each test case corresponds to a specific packet and
a specific classifier instance with a given set of TFTs. The test case
passes if the bearer identifier returned by the classifier exactly
matches with the one that is expected for the considered packet.
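For reference, this is how a TFT with a single packet filter is typically built in the ns-3 examples (a sketch along the lines of the lena-simple-epc example; the port number is arbitrary):

  #include "ns3/epc-tft.h"
  #include "ns3/ptr.h"

  using namespace ns3;

  int main ()
  {
    // Build a TFT with one downlink packet filter matching local (UE-side)
    // port 1234; packets matching it are mapped to the associated bearer
    // by the classifier.
    Ptr<EpcTft> tft = Create<EpcTft> ();
    EpcTft::PacketFilter dlpf;
    dlpf.direction = EpcTft::DOWNLINK;
    dlpf.localPortStart = 1234;
    dlpf.localPortEnd = 1234;
    tft->Add (dlpf);
    return 0;
  }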
End-to-end LTE-EPC data plane functionality
The test suite lte-epc-e2e-data ensures the correct end-to-end
functionality of the LTE-EPC data plane. For each test case in this
suite, a complete LTE-EPC simulation
scenario is created with the following characteristics:
- a given number of eNBs
- for each eNB, a given number of UEs
- for each UE, a given number of active EPS bearers
- for each active EPS bearer, a given traffic pattern (number of UDP
packets to be transmitted and packet size)
Each test is executed by transmitting the given traffic pattern both
in the uplink and in the downlink, at subsequent time intervals. The
test passes if all the following conditions are satisfied:
- for each active EPS bearer, the transmitted and received traffic
pattern (respectively at the UE and the remote host for uplink,
and vice versa for downlink) is exactly the same
- for each active EPS bearer and each direction (uplink or downlink),
exactly the expected number of packets flows over the corresponding
RadioBearer instance
X2 handover
The test suite lte-x2-handover checks the correct functionality of the X2 handover procedure. The scenario being tested is a topology with two eNBs connected by an X2 interface. Each test case is a particular instance of this scenario defined by the following parameters:
- the number of UEs that are initially attached to the first eNB
- the number of EPS bearers activated for each UE
- a list of handover events to be triggered, where each event is defined by:
+ the start time of the handover trigger
+ the index of the UE doing the handover
+ the index of the source eNB
+ the index of the target eNB
- a boolean flag indicating whether the target eNB admits the handover or not
- a boolean flag indicating whether the ideal RRC protocol is to be used instead of the real RRC protocol
- the type of scheduler to be used (RR or PF)
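For reference, the explicit handover trigger used in this kind of scenario is typically scheduled through the LteHelper; the following is a sketch excerpted from a larger scenario script (along the lines of the lena-x2-handover example), where lteHelper, enbNodes, enbDevs and ueDevs are assumed to have been created and attached already:

  // Enable the X2 interface between the eNBs and schedule an explicit
  // handover of the first UE from the first to the second eNB at t = 0.1 s.
  lteHelper->AddX2Interface (enbNodes);
  lteHelper->HandoverRequest (Seconds (0.1), ueDevs.Get (0),
                              enbDevs.Get (0), enbDevs.Get (1));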
Each test case passes if the following conditions are true:
- at time 0.06s, the test CheckConnected verifies that each UE is connected to the first eNB
- for each event in the handover list:
- at the indicated event start time, the indicated UE is connected to the indicated source eNB
- 0.1s after the start time, the indicated UE is connected to the indicated target eNB
- 0.6s after the start time, for each active EPS bearer, the uplink and downlink sink applications of the indicated UE have achieved a number of bytes which is at least half the number of bytes transmitted by the corresponding source applications
The condition “UE is connected to eNB” is evaluated positively if and only if all the following conditions are met:
- the eNB has the context of the UE (identified by the RNTI value
retrieved from the UE RRC)
- the RRC state of the UE at the eNB is CONNECTED_NORMALLY
- the RRC state at the UE is CONNECTED_NORMALLY
- the UE is configured with the CellId, DlBandwidth, UlBandwidth,
DlEarfcn and UlEarfcn of the eNB
- the IMSI of the UE stored at the eNB is correct
- the number of active Data Radio Bearers is the expected one, both
at the eNB and at the UE
- for each Data Radio Bearer, the following identifiers match between
the UE and the eNB: EPS bearer id, DRB id, LCID
Automatic X2 handover
The test suite lte-x2-handover-measures checks the correct functionality of the handover
algorithm. The scenario being tested is a topology with two, three or four eNBs connected by
an X2 interface. The eNBs are located on a straight line along the x-axis. A UE moves along the x-axis, going from the neighbourhood of one eNB to the next. Each test case is a particular instance of this scenario defined by the following parameters:
- the number of eNBs along the x-axis
- the number of EPS bearers activated for the UE
- a list of check point events to be triggered, where each event is defined by:
+ the time of the first check point event
+ the time of the last check point event
+ the time interval between two check point events
+ the index of the UE doing the handover
+ the index of the eNB where the UE must be connected
- a boolean flag indicating whether the ideal RRC protocol is to be used instead of the
real RRC protocol
- the type of scheduler to be used (RR or PF)
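For reference, a handover algorithm is selected on the LteHelper before installing the eNB devices; a minimal sketch using the A2-A4-RSRQ algorithm (the algorithm choice and attribute values are illustrative, not necessarily those used by this test suite; lteHelper is an existing Ptr<LteHelper>):

  // Sketch: pick the A2-A4-RSRQ handover algorithm and tune two of its attributes.
  lteHelper->SetHandoverAlgorithmType ("ns3::A2A4RsrqHandoverAlgorithm");
  lteHelper->SetHandoverAlgorithmAttribute ("ServingCellThreshold", UintegerValue (30));
  lteHelper->SetHandoverAlgorithmAttribute ("NeighbourCellOffset", UintegerValue (1));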
Each test case passes if the following conditions are true:
- at time 0.08s, the test CheckConnected verifies that the UE is connected to the first eNB
- for each event in the check point list:
- at the indicated check point time, the indicated UE is connected to the indicated eNB
- 0.5s after the check point, for each active EPS bearer, the uplink and downlink sink
applications of the UE have achieved a number of bytes which is at least half the number
of bytes transmitted by the corresponding source applications
The condition “UE is connected to eNB” is evaluated positively if and only if all the following conditions are met:
- the eNB has the context of the UE (identified by the RNTI value
retrieved from the UE RRC)
- the RRC state of the UE at the eNB is CONNECTED_NORMALLY
- the RRC state at the UE is CONNECTED_NORMALLY
- the UE is configured with the CellId, DlBandwidth, UlBandwidth,
DlEarfcn and UlEarfcn of the eNB
- the IMSI of the UE stored at the eNB is correct
- the number of active Data Radio Bearers is the expected one, both
at the eNB and at the UE
- for each Data Radio Bearer, the following identifiers match between
the UE and the eNB: EPS bearer id, DRB id, LCID
Handover delays
The handover procedure consists of several message exchanges between the UE, the source eNodeB, and the target eNodeB, over both the RRC protocol and the X2 interface. The test suite lte-handover-delay verifies that this procedure consistently takes the same amount of time.
The test suite runs several handover test cases. Each test case is an individual simulation featuring a handover at a specified simulation time.
For example, the handover in the first test case is invoked at time +0.100s,
while in the second test case it is at +0.101s. There are 10 test cases, each
testing a different subframe in LTE. Thus the last test case has the handover
at +0.109s.
The simulation scenario in the test cases is as follows:
- EPC is enabled
- 2 eNodeBs with circular (isotropic) antenna, separated by 1000 meters
- 1 static UE positioned exactly in the center between the eNodeBs
- no application installed
- no channel fading
- default path loss model (Friis)
- 0.300s simulation duration
The test case runs as follows. At the beginning of the simulation, the UE is
attached to the first eNodeB. Then at the time specified by the test case input
argument, a handover request will be explicitly issued to the second eNodeB.
The test case will then record the starting time, wait until the handover is
completed, and then record the completion time. If the difference between the
completion time and starting time is less than a predefined threshold, then the
test passes.
A typical handover in the current ns-3 implementation takes 4.2141 ms when using
Ideal RRC protocol model, and 19.9283 ms when using Real RRC protocol model.
Accordingly, the test cases use 5 ms and 20 ms as the maximum threshold values.
The test suite runs 10 test cases with Ideal RRC protocol model and 10 test
cases with Real RRC protocol model. More information regarding these models can
be found in Section RRC protocol models.
The motivation behind using subframes as the main test parameters is the fact
that subframe index is one of the factors for calculating RA-RNTI, which is used
by Random Access during the handover procedure. The test cases verify this
computation, utilizing the fact that the handover will be delayed when this
computation is broken. In the default simulation configuration, the handover
delay observed because of a broken RA-RNTI computation is typically 6 ms.
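For reference, the RA-RNTI computation mentioned here is the one specified in 3GPP TS 36.321, Section 5.1.4; a minimal sketch of it (the helper function is illustrative, not the ns-3 implementation):

  #include <cstdint>
  #include <iostream>

  // RA-RNTI = 1 + t_id + 10 * f_id, where t_id is the index of the first
  // subframe of the PRACH (0..9) and f_id the PRACH frequency index (0..5).
  uint16_t
  RaRnti (uint8_t subframeIndex, uint8_t frequencyIndex)
  {
    return 1 + subframeIndex + 10 * frequencyIndex;
  }

  int main ()
  {
    for (uint8_t sf = 0; sf < 10; ++sf)
      {
        std::cout << "subframe " << unsigned (sf)
                  << " -> RA-RNTI " << RaRnti (sf, 0) << std::endl;
      }
    return 0;
  }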