Dr. Senad Bulja, PhD, FIET, SMIEEE
https://drbulja.com

Characterisation and application of nematic liquid crystals in microwave devices
https://drbulja.com/characterisation-and-application-of-nematic-liquid-crystals-in-microwave-devices-1643/
Mon, 16 Sep 2024

The abundance of widely available spectrum in the frequency band around 60 GHz (the mm-wave region) shows potential to support high-data-rate, short-range wireless communications. This has led to an increased demand for cost-effective solutions for the RF front end, such as antennas, phase shifters and filters. Preferably, these mm-wave devices need to be reconfigurable and compact.

Liquid Crystals (LCs) have become attractive substrates for microwave devices. They possess a significantly tuneable dielectric constant in the mm-wave band, which can be exploited in compact, reconfigurable devices such as phase shifters and antennas. When designing such devices, two main problems are normally encountered. Firstly, the dielectric properties of only a few LCs have been fully characterised in this band. Secondly, design tools fail to account fully for the spatial dependence of the liquid crystal orientation and its effect on the electromagnetic fields. We address the problem of characterisation using a microstrip line fabricated with a layer of liquid crystal as its substrate. Standard microwave substrates are employed, resulting in a practical and cost-effective characterisation device. A network analyser is used to measure the scattering parameters before and after filling with liquid crystal. Accurate models of the director and microwave fields are then used to set up an inverse problem that allows the recovery of a number of liquid crystal material properties, including permittivities, loss tangents and elastic constants. Results of the characterisation are presented for a number of liquid crystalline materials.
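As a heavily simplified, illustrative sketch of the inverse-problem idea (not the method of the article, which relies on full director and microwave field models), the snippet below recovers a single effective permittivity from the unwrapped insertion phase of a line via φ = 2πf·√εeff·L/c. The line length, frequency and phase values are made up for illustration, not taken from the measurements.

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def effective_permittivity(phase_deg, freq_hz, length_m):
    """Recover an effective dielectric constant from the unwrapped
    insertion phase of a line: phi = 2*pi*f*sqrt(eps_eff)*L/c."""
    phase_rad = math.radians(phase_deg)
    return (phase_rad * C0 / (2 * math.pi * freq_hz * length_m)) ** 2

# Illustrative numbers (not from the paper): a 20 mm line showing
# 380 degrees of unwrapped insertion phase at 10 GHz.
print(round(effective_permittivity(380.0, 10e9, 0.02), 2))  # -> 2.5
```

The real extraction fits several material parameters at once (permittivities, loss tangents, elastic constants) against modelled S-parameters; the single-parameter version above only shows the shape of the inversion.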

Fig. 1 Perspective view of the structure of measurement LC cell
In-Vessel Resonant Communications
https://drbulja.com/in-vessel-resonant-communications-1623/
Mon, 05 Aug 2024

This article builds upon our previous conference article on in-vessel communications [1], which examined the feasibility of performing communications inside enclosed volumes at their eigenmode frequencies. Such an approach is of particular importance when the enclosed volume contains lossy media with a high relative dielectric constant.

In the present article, we quantify the relationship among the dielectric characteristics of the media inside the enclosed volume (relative dielectric constants and their losses), the antenna sizes and positions, and their influence on the overall communication losses. For the purpose of the experiment, a cylindrical metal vessel (barrel) with a height H = 80 cm and a radius R = 30 cm is used, Fig. 1 (a). The resonator formed in this way is excited using monopole antennas/sensors (Tx and Rx antennas), Fig. 1 (a), and the barrel is filled with a high relative dielectric constant, high-loss dielectric (tap water).

The main findings of the article are:

  1. Losses of the medium are detrimental to the overall transmission loss; however, they also reduce the optimal frequency of operation, implying that smaller probes can be used to excite such a cavity, Fig. 1 (b).
  2. Overall transmission losses decrease as the size of the excitation antennas is increased, Fig. 1 (c); however, this holds only up to a certain frequency. Increasing the antenna size beyond this frequency is detrimental to communications. For resonant communications, probe size should be kept at a minimum.
  3. The positions of the transmitting and receiving probes are of utmost importance, since their position may or may not coincide with the location of the electric field maxima and, hence, with regions of low loss.

 

Fig. 1 Cylindrical resonant cavity with two sensors placed inside it (a); simulated transmission coefficient for the case when the cavity is filled with high dielectric constant material with and without losses (b); measured transmission coefficient as a function of probe size (c)

Points 1-3 above indicate that in static systems, i.e. systems where the locations of the transmitting and receiving probes are predefined, it is always possible to find the optimum frequency of operation, considering probe size, media losses and size constraints. However, in dynamic systems, i.e. systems where the transmitting and receiving probes are moving, the optimal frequency of operation will be highly dependent on the exact location of the probes. In this case, the frequency of operation should be an adjustable parameter, and its exact value can be found by performing a scan over a predefined frequency range, from which the frequency exhibiting the lowest losses is selected.
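The scan-and-select step described above amounts to sweeping a predefined frequency range, recording the transmission coefficient, and picking the frequency with the least loss. A minimal sketch with made-up scan data (frequency in MHz, |S21| in dB):

```python
# Hypothetical scan data: (frequency in MHz, measured |S21| in dB).
scan = [(40, -62.1), (45, -48.7), (50, -31.2), (55, -44.9), (60, -58.3)]

# Select the operating frequency with the lowest transmission loss,
# i.e. the largest (least negative) |S21|.
best_freq, best_s21 = max(scan, key=lambda point: point[1])
print(best_freq, best_s21)  # -> 50 -31.2
```

In a deployed dynamic system this selection would be repeated whenever the probes move, since the low-loss frequency shifts with probe location.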

References:

[1] V. Kirillov, D. Kozlov, H. Claussen and S. Bulja, “Performance Estimation of In-Vessel Resonant Communications”, 18th European Conference on Antennas and Propagation, (EuCAP), 2024, United Kingdom.

 

Performance Estimation of In-Vessel Resonant Communications
https://drbulja.com/performance-estimation-of-in-vessel-resonant-communications-1618/
Mon, 10 Jun 2024

Measurement of pertinent parameters of liquids, such as temperature, density or viscosity, within large enclosed vessels such as barrels, cisterns or tanks is an important practical task for controlling technological processes or storage conditions. This type of measurement requires the establishment of reliable wireless communication between multiple sensors, preferably, but not necessarily, uniformly distributed within the enclosed vessel. However, this is a challenging task, since the applicability of existing traditional communication methods is performance-limited in the scenario of enclosed vessels filled with high-loss liquids.

As is known, optical communication links are reliable under line-of-sight conditions; however, they are adversely affected by the opacity and turbidity of liquids. Acoustic communication is a well-established approach for such scenarios, but it is hampered by environmental factors such as temperature, pressure and external interference. Radio Frequency (RF) communications can be an attractive solution to overcome the above-mentioned limitations of optical and acoustic in-vessel communications. A standard RF link is established by the interaction of transmitting and receiving antennas, which are traditionally half or a quarter of a wavelength in size. This means that the operational frequency needs to be relatively high (GHz range) to allow small antennas in the limited space of the vessel. However, at that frequency range, the losses related to the propagation of electromagnetic (EM) waves through liquids are too high to establish a reliable communication link. Thus, a new approach for in-vessel communications is required to overcome these challenges.
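The antenna-size argument can be quantified with a quick free-space estimate (inside a dielectric the lengths shrink further by a factor of √εr); the frequencies below are arbitrary examples, not values from the article:

```python
C0 = 299_792_458.0  # speed of light in vacuum, m/s

def quarter_wave_m(freq_hz):
    """Length of a quarter-wavelength monopole in free space."""
    return C0 / freq_hz / 4

# Why compact in-vessel antennas push the operating frequency into the GHz range.
for f in (100e6, 1e9, 2.4e9):
    print(f"{f/1e6:.0f} MHz: {quarter_wave_m(f)*100:.1f} cm")
# -> 100 MHz: 74.9 cm
# -> 1000 MHz: 7.5 cm
# -> 2400 MHz: 3.1 cm
```

The tension is visible immediately: sub-GHz antennas do not fit a small vessel, while GHz-range waves are heavily attenuated by lossy liquids, which is what motivates the resonant approach.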

Here we propose an alternative approach, which considers an enclosed volume as a low-frequency resonator with communication performed at its resonant frequencies. To this end, it is well known that any resonator has an infinite number of eigenmodes, which are characterized by their own eigenfrequency and a predefined EM field distribution. In this case, efficient data transmission between transmitting and receiving antennas is possible at eigenfrequencies. Due to the multiple reflections of EM waves from cavity walls, EM energy remains inside the enclosed volume, leading to reduced transmission loss between two antennas located inside the cavity in comparison with free-space propagation.

Fig. 1 Cylindrical cavity resonator filled with water (a); electric field distribution corresponding to the first eigenmode (b), the second eigenmode (c) and a high-order eigenmode (d).

As a demonstration of the proposed principle, a cylindrical vessel with a height H and a radius H/2, shown in Fig. 1, is used to assess the practical feasibility of this approach. Simulation and experimental results are presented together with an analysis of the efficiency of the excitation of cavity resonators. Further, the influence of antenna sizes and the dielectric properties of liquids on the transmission characteristics between two antennas is investigated. It is demonstrated that determining the optimal antenna size for the communication link is one of the most important practical tasks in the development of an in-vessel communication system.
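As a rough, back-of-the-envelope illustration (not a calculation from the paper), the lowest TM eigenmode of a closed cylindrical cavity has a closed-form frequency that depends only on the radius, and filling the cavity with a high-permittivity liquid scales it down by 1/√εr. The 30 cm radius matches the barrel of the companion experiment; εr = 80 for water at low frequencies is an assumed textbook value:

```python
import math

C0 = 299_792_458.0   # speed of light in vacuum, m/s
X01 = 2.404826       # first root of the Bessel function J0

def tm010_frequency(radius_m, eps_r=1.0):
    """Resonant frequency of the TM010 mode of a cylindrical cavity.
    This eigenfrequency depends only on the radius, not the height:
    f = c * x01 / (2 * pi * R * sqrt(eps_r))."""
    return C0 * X01 / (2 * math.pi * radius_m * math.sqrt(eps_r))

R = 0.30  # barrel radius, m
print(round(tm010_frequency(R) / 1e6, 1))        # empty cavity, MHz -> 382.5
print(round(tm010_frequency(R, 80.0) / 1e6, 1))  # water-filled, MHz -> 42.8
```

The water filling pulls the lowest eigenfrequency down by almost an order of magnitude, which is consistent with the article's observation that high-permittivity media allow smaller probes and lower operating frequencies.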

Reconfigurable RF materials and devices for 6G and novel architectures
https://drbulja.com/reconfigurable-rf-materials-and-devices-for-6g-and-novel-architectures-1615/
Sun, 24 Mar 2024

The last decade has witnessed significant investment in the research, development and deployment of 5G networks and systems throughout the world. The deployment of 5G systems is currently ongoing, and it is rightfully expected that the addition of new communication services and capabilities will inevitably be reflected in the way we operate as a society. The changes brought about by the advances in 5G are expected to play a transformative societal role, with the demands placed upon the technology ultimately becoming greater than the technology can deliver. The interplay between evolving societal needs and the push from advancing technological tools is expected to play a major role in the definition of the new generation of communications – 6G [1].

Major technological advances in the areas of hardware, cloud, open source, the continuous evolution of DevOps, AI and the evolution of the Internet are expected to play a pivotal role in the rise of the new services and capabilities that future networks will need to provide. Even though the exact specifications to be imposed on the new generation of networks are difficult to predict with certainty, the evolution path from 4G towards 5G provides a hint of possible directions. It will, therefore, not come as a surprise that future networks will be expected to cater for exponentially increasing traffic, stemming from a variety of scenarios, with examples covering healthcare, cyber, AI and immersive communication platforms, to name but a few. These platforms will not only necessitate extremely high data rates, network resilience and adaptability, but will also be expected to provide sub-millisecond end-to-end latency and be able to avail of flexible spectrum usage [2].

Flexible spectrum usage refers to the ability of communications devices to adapt their RF front ends in accordance with the needs and requirements of the communications system. Here, adaptation refers to the capability of communications hardware (RF and mm-wave) to adapt itself in terms of operating frequency, bandwidth and radiation characteristics, so as to support the new applications and services of the forthcoming 6G standards. It is believed that the lower frequency bands (below 6 GHz) will remain as important to 6G as they are to current 4G/5G technologies; however, it is also expected that the mm-wave spectrum (24 GHz-52 GHz) already used by 5G will be extended towards 100 GHz for use in 6G. There are expectations that the spectrum above 100 GHz [3] will also need to be utilised; however, major technological advances are necessary to address the problem of challenging propagation environments.

To address the above-mentioned future challenge related to hardware for 6G, major technological advances on the level of new materials, RF and mm-wave devices and Transceiver (TRX) architectures are needed. The hardware research challenge in 6G is therefore three-fold:

  1. New RF and mm-wave materials enabling tunability and re-configurability.
  2. New RF and mm-wave devices capable of exploiting the capabilities of new materials.
  3. New TRX architectures capable of exploiting the capabilities of new materials and new devices.

As indicated earlier, new services and applications will require a great deal of flexibility, to an extent that cannot be catered for using the existing reconfigurable technologies. In other words, solving the problems of the future requires the tools of the future. The new materials are expected to act as fuel for the development of new devices with new capabilities, and will support the new transceiver architectures needed for flexible spectrum usage.

State-of-the-art & Limitations

Current reconfigurable RF and mm-wave hardware is still in its infancy. Even though re-configurability has been present in RF devices and components for at least two decades, these devices have traditionally suffered from high insertion losses and low switching speeds and have almost always been an afterthought [4-7]. These early tuneable devices usually consisted of a semiconductor device (diode or transistor) added to a standard, passive RF device (filter, antenna, phase shifter) in order to gain a limited degree of controllability. There have been, however, several efforts to integrate externally controllable bulk tuneable materials, such as Liquid Crystals (LCs) [8-10], Ferro-Electrics (FEs) [11-13] and, most recently, Electro-Chromic (EC) materials [14-17], with RF and mm-wave circuits such as filters, antennas and phase shifters. However, as mentioned earlier, such approaches have not gained a great deal of commercial traction, since the electrical and size requirements imposed on commercially available RF and mm-wave hardware are very strict and simply cannot be met using the reconfigurable technologies available today. For example, the loss tangents of LC mixtures still tend to be quite large (of the order of 0.03 [9]), which is considered too large for many applications, while their response time is usually of the order of a few ms. Ferro-electrics, even though they offer larger tuneable ranges than LCs, exhibit very high dielectric constants, which are of limited use for RF and mm-wave applications. Research on EC materials is still at an early stage; however, it appears to offer a compromise between LCs and FEs, with the added benefit of a strong memory effect [18-21].
The devices and architectures that these traditional bulk-tuneable materials can support, in the context of future networks, are therefore limited to, for example, phase shifters and attenuators, where integration with semiconductor technologies is easier.

Fig. 1 (Left) Detailed transparent view of the reflectarray (the ground plane is shown only by the slots) and (Right) fabricated reflectarray with over 13,000 antenna elements operating at 300 GHz.

Increased re-configurability is expected to benefit the upcoming 6G communications in a paradigm-shifting way. An example is Intelligent Reflecting Surfaces (IRS), which have recently attracted a great deal of attention, since they allow the attainment of some form of re-configurability, which is of great importance in the context of 5G and the upcoming 6G specifications. In particular, IRS have been used to mitigate the harmful effects of the wireless environment by virtue of their ability to redirect the incoming signal along a specific path. This ability is usually achieved by controlling some parameters of the meta-atoms, such as their phases and amplitudes. The elements of an IRS can be controlled through either a semiconductor device, a Micro-Electro-Mechanical Switch (MEMS) or liquid crystals [22], depending on the parameter of the meta-atoms that is being controlled. In turn, this allows the IRS to manipulate the incident wavefront to achieve beam-steering, adjustable absorption, polarisation, filtering and collimation [23]. However, the losses and latency times of the constituent materials limit their application range [24].

In a similar vein, frequency tunability/re-configurability of the constituent materials is expected to enable entirely novel RF and mm-wave front ends. For example, traditional RF and mm-wave front ends, such as beam-former networks, still rely on semiconductor components to achieve beam steering, while filter banks are still passive and not tunable/reconfigurable. However, recently there has been a rise in the "re-invention" of some of the older antenna technologies, such as the Luneburg and Rotman lenses [25-29]. The interest in such technologies has been fuelled by the availability of advanced manufacturing techniques (3D plastic and metal printing), but also by the fact that the new communications frequencies are expected to be in the mm-wave region, which significantly reduces the physical size of these structures. Nevertheless, introducing re-configurability/tunability into these structures is, from the standpoint of current technologies, severely limited.
Novel Approach

Multifunctional materials, i.e. materials that can adapt to the external environment, will be key for future devices. 2D materials hold a unique position in the design of such structures. The family of 2D crystals now includes a number of different one-atom-thick materials: insulators, semiconductors and metals, giving rise to very complex van der Waals heterostructures, in which such 2D crystals can be combined to form artificial materials with predetermined properties. Furthermore, by combining such crystals with other materials, for instance polyelectrolytes, it is possible to make them responsive to the environment and to force them to adapt to external conditions.

In order to address the 6G communications challenge in an appropriate manner, it is imperative for RF practitioners to engage interactively with material scientists in order to create new functional RF and mm-wave materials that will fuel the development of agile RF and mm-wave hardware. Here, we would like to mention Transition Metal Oxides, Phase Change Materials and other bespoke 2D materials, capable of high dynamic ratios and high switching speeds, that have the capacity to fuel the development of future hardware, such as:

  1. Reconfigurable filters. Filters are not only expected to be the direct beneficiaries of the advances in new materials but based on the high switching speeds of such new materials, a new paradigm shift in filtering architectures is expected. Examples are filter configurations obtained using high-speed switches, where the shape of the passband can be tailored by weighted sampling.
  2. Reconfigurable antennas. Antennas are an additional beneficiary of the advances in new materials. Here it is envisaged that such switches can be used in the circuit of meta-material lens antennas in order to create variable radiation patterns.
  3. Intelligent surfaces (IS). It was mentioned earlier that, in their current state, the insertion losses and latency of IRSs preclude their widespread use. For example, the best-in-class insertion loss of a state-of-the-art semiconductor switch is about 0.15 dB at frequencies up to 6 GHz. IRSs are, in general, composed of hundreds, if not thousands, of such switches, which inevitably increases the total insertion losses. In this respect, a combination of new materials/switches and innovative RF architectures can be used to create low-loss IRS.
  4. THz communications. The interest in THz communications has been growing in recent years, driven primarily by the expectation that future communications will be able to take advantage of advancing technological tools. This has been followed closely by a resurgent interest in some technologies developed in the 1950s, such as the Resonant Tunnelling Diode (RTD) [30-34]. The RTD, in addition to exhibiting very high switching speeds, has a region of negative dynamic resistance, which makes the prospect of research and development of novel transceiver architectures capable of exploiting this phenomenon very attractive. This has the potential to give rise to new transceiver architectures, such as novel THz beamforming technologies, where the negative dynamic resistance can be employed to create radiating structures that can be combined in a variety of topologies, such as a beam-forming network or a smart THz surface.
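The loss-accumulation argument in point 3 is simple dB arithmetic. The sketch below assumes a hypothetical signal path through eight cascaded 0.15 dB switches; the figures are illustrative, not taken from any specific IRS design:

```python
def cascaded_loss_db(per_switch_db, n_switches):
    """Insertion losses of identical series-cascaded switches add up in dB."""
    return per_switch_db * n_switches

def power_efficiency(loss_db):
    """Fraction of the input power that survives a given insertion loss."""
    return 10 ** (-loss_db / 10)

# Hypothetical path through 8 cascaded 0.15 dB switches.
loss = cascaded_loss_db(0.15, 8)
print(round(loss, 2), round(power_efficiency(loss), 3))  # -> 1.2 0.759
```

Even a seemingly small per-switch loss thus eats roughly a quarter of the power over an eight-switch path, which is why low-loss switching materials matter so much for surfaces built from thousands of elements.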

The creation of reconfigurable, "smart" RF materials will increase the need for "smart" control of RF and mm-wave hardware. Here, Artificial Intelligence (AI) is expected to play a crucial role.

References

[1] https://www.ericsson.com/en/reports-and-papers/white-papers/a-research-outlook-towards-6g
[2] https://www.controleng.com/articles/important-technological-developments-to-watch-for-6g/#:~:text=In%206G%2C%20the%20frequency%20ranges,(0.3%20to%2010%20THz)
[3] https://www.ericsson.com/en/reports-and-papers/ericsson-technology-review/articles/the-future-of-cloud-computing
[4] A. Malczewski et al., 'X-band RF MEMS phase shifters for phased array applications', IEEE Microwave and Guided Wave Letters, vol. 9, no. 12, pp. 517-519, December 1999.
[5] H. T. Kim et al., ‘A compact V-band 2-bit reflection-type MEMS phase shifter’, IEEE Microwave and Wireless Components Letters, vol. 12, no. 9, pp. 324-326, September 2002.
[6] C. L. Chen et al., ‘A low loss Ku-band monolithic analog phase shifter’, IEEE Trans. Microwave Theory Tech., vol. MTT-35, no.3., pp. 315-320, March 1987.
[7] D. M. Krafcsik et al., ‘A dual varactor analog phase shifter operating at 6– 18 GHz’, IEEE Trans. Microwave Theory and Tech., vol. 36, no.12, pp.1938-1941, December 1988.
[8] R. James, F. A. Fernandez, S. E. Day, S. Bulja and D. Mirshekar-Syahkal, "Accurate modelling for wideband characterisation of nematic liquid crystals for microwave applications", IEEE Transactions on Microwave Theory and Techniques, vol. 57, issue 12, pp. 3293-3297, 2009.
[9] S. Bulja, D. Mirshekar-Syahkal, R. James, S. E. Day and F. A. Fernandez, "Measurement of dielectric properties of nematic liquid crystals at millimetre wavelength", IEEE Transactions on Microwave Theory and Techniques, vol. 58, issue 12, pp. 3493-3501, 2010.
[10] M. Yazdanpanahi, S. Bulja, D. Mirshekar-Syahkal, R. James, S. E. Day and F. A. Fernandez, "Measurement of dielectric constants of nematic liquid crystals at mm-wave frequencies using patch resonator", IEEE Transactions on Instrumentation and Measurement, vol. 59, issue 12, pp. 3079-3085, 2010.
[11] R. R. Romanofsky, “Advances in scanning reflectarray antennas based on ferroelectric thin-film phase shifters for deep space communications”, in Proceedings of the IEEE, pp. 1968-1975, vol. 95, issue 10, 2007.
[12] M. Haghzadeh, C. Armiento and A. Akyurtlu, "All-printed flexible microwave varactors and phase shifters based on a tunable BST/Polymer", IEEE Transactions on Microwave Theory and Techniques, vol. 65, issue 6, pp. 2030-2042, 2017.

6G – how do we get there?
https://drbulja.com/6g-how-do-we-get-there-1392/
Fri, 03 Nov 2023

There has been a great deal of talk about 6G, but it is not very clear what exactly 6G is supposed to be. Simply put, there are, as of now, no standards that clearly spell out the benefits that a 6G communications standard will bring, or the gaps in the current 5G communication standards that it will address. That is not to say that the academic literature is short of ideas as to what 6G should be like; however, most of them fail to paint the bigger picture of what exactly 6G is going to be and why we need it. Some good hints can be found in a variety of white papers, such as those from Ericsson [1], which clearly spell out that 6G will be driven not only by a technological push, but also by a societal pull. Let us go briefly through the generational journey of how we ended up where we are, to shed some light on what 6G may actually look like. We will start with the 4G communications standard, as going further into the past will not bring useful insights.

 

The 4G communications standard offered a great deal of improvement compared to the previous-generation (3G) communication standard, namely in speed, latency and capacity. In terms of speed, 4G networks could deliver download speeds of up to 150 Mb/s and upload speeds of 50 Mb/s, a marked improvement over the 3G equivalents of 7.2 Mb/s and 2 Mb/s, respectively [2]. Latency figures were also much more favorable for the 4G standard, averaging between 50 ms and 80 ms, compared to 3G networks, where the average stood between 60 ms and 340 ms [3]. However, from the technical point of view, the most marked difference was the use of advanced antenna techniques, such as beamforming and Multiple Input Multiple Output (MIMO). In this respect, Alcatel-Lucent introduced the lightRadio concept [4], a beamforming technique which relied on the use of individual "cubes", each containing a standalone transceiver and antenna. The greater the number of "cubes", the greater the beamforming capability and, hence, the capacity of the system. In terms of operational frequencies, both the 3G and 4G standards operate below 3 GHz, which ensured wide coverage at the expense of limited achievable data rates.

 

The need for greater speeds, lower latencies and higher capacities gave rise to 5G. As an example, 5G offers peak data rates of up to 20 Gbps and latencies between 8 ms and 12 ms, a tremendous improvement compared to 4G. However, it is important to distinguish between the two flavors of 5G – Frequency Range 1 (FR1) and Frequency Range 2 (FR2). FR1 uses frequencies below 6 GHz and, in terms of capacity and speed, represents only a minor improvement compared to 4G. However, the strength of 5G comes to light in FR2, which occupies frequencies from 24 GHz to 71 GHz, with the spectrum between 24.25 GHz and 29.5 GHz [5] being the most used 5G mm-wave range. 5G FR2 (or mm-wave 5G) indeed offers impressive data rates, albeit only over a limited spatial range. This is primarily because free-space attenuation at mm-wave frequencies, i.e. the frequencies of 5G FR2, is an order of magnitude higher than at the frequencies of FR1 (below 6 GHz). In addition to enabling unprecedented data rates compared to 4G, 5G is also poised to aid the proliferation of Internet of Things (IoT) devices, in particular at mm-wave frequencies, where the size of the RF front end (filters and antennas) is much smaller than at frequencies below 6 GHz.
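The order-of-magnitude claim follows directly from the free-space path loss formula. The sketch below compares two illustrative carrier frequencies (one typical of FR1, one typical of FR2; neither is taken from a specific channel plan) at the same distance; the ~18 dB difference corresponds to a factor of roughly 64 in power:

```python
import math

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def fspl_db(freq_hz, dist_m):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    return 20 * math.log10(4 * math.pi * dist_m * freq_hz / C0)

# Hypothetical comparison at a 100 m link distance.
low = fspl_db(3.5e9, 100)   # an illustrative FR1-style carrier
high = fspl_db(28e9, 100)   # an illustrative FR2 (mm-wave) carrier
print(round(low, 1), round(high, 1), round(high - low, 1))  # -> 83.3 101.4 18.1
```

Since the difference depends only on the frequency ratio (20·log10(28/3.5) ≈ 18 dB), it holds at any distance, which is why mm-wave cells are inherently short-range.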

 

Given this brief overview of 4G and 5G communications standards, one may rightfully ask – why 6G? What are the benefits? What will 6G be capable of that 5G is not? Simply put, this is not fully known at this stage, but as the white paper from Ericsson states, it is expected to be a combination of technological push and societal pull. In other words, the drive towards 6G will be human-centric. The frequencies of 6G communications standards are not known at this stage, however, it is strongly believed that 6G frequencies will extend to over 100 GHz [1]. To be exact, the frequency range under consideration for 6G includes W-band (75 GHz-110 GHz), D-band (110 GHz – 175 GHz) and in some instances even higher frequency bands.

Fig. 1 13,000 antenna element reflectarray operating at a centre frequency of 300 GHz (diameter of less than 4 cm)

Even higher frequency bands are mentioned (275 GHz – 300 GHz, and from 0.3 THz – 10 THz) [6]. Given the lesson we have learnt from 5G FR2 – that increasing the frequency of operation ultimately reduces the spatial range due to the increase in losses – 6G networks are inherently poised to cater for short spatial ranges with extremely high data rates. This, in turn, suggests that the networks of the future will be highly dense, with a variety of micro, pico and femto cells overlapping each other to provide good coverage. This may work well for urban environments, but in rural environments it would be impractical and cost-ineffective. However, that is not the main problem.

One of the biggest technical problems that 6G communication networks face lies with the need to address backward compatibility, as such new networks will be expected to perform well in the context of 5G and possibly 4G, which, as shown earlier, operate at lower frequencies than prospective 6G. In other words, 6G networks, in addition to catering for extremely high data rates and being highly resilient and of low latency, will also need to be able to avail of flexible spectrum usage [7], meaning that they must be adaptable enough to receive and transmit signals over a variety of frequencies. As we have seen, such frequencies can range from below 6 GHz (5G FR1), through mm-wave frequencies (24 GHz – 71 GHz, 5G FR2), up to, possibly, 300 GHz. From the Radio Frequency (RF) hardware point of view, this is a tremendous, if not impossible, task to accomplish, which can only be achieved using reconfigurable hardware – the "holy grail" for RF stalwarts. This is the problem. Reconfigurable technologies enabling hardware adaptability, even though they have been the subject of academic and industrial research for over three decades, have not yielded much traction: they are either too lossy and slow, as in the case of Liquid Crystals (LCs) [8], non-conducive to RF, as with Ferro-Electrics (FEs) [9], or difficult to integrate into RF circuits, as with semiconductors. Solving this problem or, at least, reducing its impact will be of tremendous importance in paving the way to 6G. At present, however, there is no obvious way in which this will be achieved. If and when it is solved, however, it would allow a plethora of novel RF communication devices, such as Intelligent Surfaces, to be fully coupled with the advances made in Artificial Intelligence (AI) and reach a real-world impact that will benefit everyone. Until then, research continues.

References:

[1] https://www.ericsson.com/en/reports-and-papers/white-papers/a-research-outlook-towards-6g

[2] https://en.wikipedia.org/wiki/4G

[3] https://www.researchgate.net/figure/Maximum-and-average-latency-in-4G-and-3G-networks-6_fig3_338598740

[4] https://money.cnn.com/2011/03/21/technology/light_radio/index.htm

[5] frequency spectrum between 24.25-29.5 GHz being the most used 5G mm-wave spectrum range

[6] https://www.controleng.com/articles/important-technological-developments-to-watch-for-6g/#:~:text=In%206G%2C%20the%20frequency%20ranges,(0.3%20to%2010%20THz)

[7] https://www.ericsson.com/en/reports-and-papers/ericsson-technology-review/articles/the-future-of-cloud-computing

[8] S. Bulja, D. Mirshekar-Syahkal, R. James, S. E. Day and F. A. Fernandez, “Measurement of dielectric properties of nematic liquid crystals at millimetre wavelength”, IEEE Transactions on Microwave Theory and Techniques, vol. 58, no. 12, pp. 3493-3501, 2010.

[9] M. Haghzadeh, C. Armiento and A. Akyurtlu, “All-printed flexible microwave varactors and phase shifters based on a tunable BST/Polymer”, IEEE Transactions on Microwave Theory and Techniques, vol. 65, no. 6, pp. 2030-2042, 2017.

 

Wireless power transfer https://drbulja.com/wireless-power-transfer-1387/ Thu, 12 Oct 2023 09:18:45 +0000

Introduction

Wireless power transfer has gained significant attention in recent years, driven primarily by consumer demand, with wireless charging of mobile devices (phones, tablets) being a prime example. However, it is important to remember that the idea of wireless power transfer is old and dates back to the 19th century. To be precise, magnetic resonance as a means for Wireless Power Transfer (WPT) was first demonstrated by Hertz in 1887 [1], which was followed by experimental work by Nikola Tesla spanning the 1890s through the 1920s [2,3,4]. A major milestone in the field of wireless power transmission came in 1963, when William C. Brown demonstrated a wirelessly powered helicopter. Even though work on wireless power continues to this day, it is important to understand the types of wireless power transfer, their efficiencies and, ultimately, their limitations.

In general, wireless power transfer comes in two basic flavours: near field and far field. The distinction is made with regard to the relative positions of the transmitting and receiving circuits.

 

Near field wireless power transfer, in general, occurs when the transmitting and receiving circuits are located up to one wavelength apart from each other; however, this definition is rather broad and of limited use in wireless charging. Here, it is important to note that, strictly speaking, the near field region consists of two sub-regions: the reactive near field and the radiative near field. The reactive near field, characterized by rapid variations of the electric and magnetic fields, extends up to a distance of λ0/2π from the transmitting circuit, while the radiative near field region lies between λ0/2π and λ0. The reactive near field region is of high importance to modern wireless charging for two main reasons:

 

  1. The efficiency of wireless charging in the reactive near field region is a strong function of distance. The Electro-Magnetic (EM) fields in this region decay as ~1/r³, inferring that maximum power transfer occurs when the transmitting and receiving devices are in close proximity to each other.
  2. The reactive near field is non-radiative, inferring that wireless power transfer in this case does not require an antenna of any kind. Instead, most near field wireless power transfer technologies use inductive coupling, which consists of inductive coils at the transmitting and receiving ends, Fig. 1. Being non-radiative, this wireless power transmission technology infers that health risks due to magnetic field exposure are minimized. For better efficiency, a capacitor can be added in series with the inductor coils at both the transmitting and receiving ends, creating a resonant circuit. The circuit obtained in this way is best operated at the resonant frequency, given by f_r = 1/(2π√(L_i·C)), (i = 1, 2), as at this frequency the reactive losses of the power transfer circuit are zero and the only losses in the primary and secondary coils come from the parasitic resistances in the circuit.
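As a numerical sketch of the resonant-frequency formula above (the coil and capacitor values here are hypothetical, chosen to land inside the Qi frequency range quoted later):

```python
import math

def resonant_frequency(L, C):
    """Series-resonant frequency f_r = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Hypothetical coil and compensation capacitor: 24 uH with 100 nF
# resonates at roughly 103 kHz, inside the 5 kHz - 300 kHz Qi band
f_r = resonant_frequency(24e-6, 100e-9)
print(f"f_r = {f_r / 1e3:.1f} kHz")
```

At resonance the inductive and capacitive reactances cancel, which is why only the parasitic coil resistances remain as loss terms.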

 

It is worth mentioning that reactive near field wireless power transmission can also be performed using electric fields, in which case capacitive plates are used – one at the transmitting end and one at the receiving end, Fig. 2. This case can be considered the conjugate of inductive coupling, as energy is transmitted by electric fields rather than magnetic fields. By bringing the capacitive plates into proximity with each other, an Alternating Current (AC) will begin to flow, thereby enabling power transmission. The closer the plates are, the more efficient the power transfer. In addition, increasing the frequency of operation reduces the reactance produced by the capacitive plates, inferring greater efficiency. Similar to the case of inductive near field wireless power transmission, near field capacitive charging benefits from the creation of a resonant circuit – in this case, however, an inductor is added in series instead of a capacitor. However, this type of wireless power transfer is very rarely used, primarily because high voltages and, hence, high electric fields are needed for efficient power transmission, which can become a health hazard. This is because electric fields interact to a much greater degree with the human body than magnetic fields.

Fig. 1 Inductive wireless power transfer

Fig. 2 Capacitive wireless power transfer

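The falling reactance with frequency mentioned above can be sketched numerically (the plate coupling capacitance here is a hypothetical value):

```python
import math

def coupling_reactance(f, C):
    """Magnitude of the series reactance X_C = 1 / (2*pi*f*C) of the plate coupling."""
    return 1.0 / (2.0 * math.pi * f * C)

# Hypothetical plate-to-plate coupling capacitance of 10 pF: raising the
# operating frequency lowers the series reactance, easing power transfer
C = 10e-12
for f in (1e6, 6.78e6, 27.12e6):
    print(f"{f / 1e6:6.2f} MHz -> |X_C| = {coupling_reactance(f, C):7.1f} ohm")
```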

Near field wireless power transfer, particularly inductive coupling, has found various commercial uses – examples span electric toothbrushes, shavers and wireless charging pads for mobile phones and tablets. At present there are two main “camps” promoting their own versions of near field wireless charging: the Wireless Power Consortium [5], promoting the Qi standard [6], and the AirFuel Alliance [7], promoting AirFuel Resonant [8]. It is worth mentioning that the Qi standard is a universal, open standard and operates, depending on the power level, at frequencies from 5 kHz to 300 kHz, whereas the AirFuel Resonant operating frequency is expected to be 6.78 MHz [9]. The expected main advantage of AirFuel Resonant lies with the fact that it can charge several devices at the same time.

Regardless of the standard used, the efficiency of near-field wireless powering will always be lower than that of its wired competition. Efficiency in this case strongly depends on the separation between the coils, the frequency of operation, the alignment of transmitters and receivers, the thicknesses of the coils and the complexity of the drive electronics. However, in best-case scenarios, efficiencies of up to 80% appear to be achievable [10].

Far field wireless power transfer, in general, relies on the use of directive antennas to “beam” the power to a desired user, where it is subsequently converted to DC power through a rectifier circuit, such as a rectenna. Its practical use has been demonstrated on numerous occasions, such as the experiment in Reunion [11], which used a 33 dBi gain antenna to wirelessly transmit power over 700 m with an overall efficiency of over 57%. The system operated at a frequency of 2.45 GHz.

The efficiency of the far field wireless power transfer technique is given by the following equations:

η = Gt · Gr · ηiso , (1)

ηiso = (λ/4πd)² , (2)

which infer that the total efficiency of far field wireless power transfer depends on the antenna gains of the transmitting and receiving circuits, given by Gt and Gr, respectively, and the term (λ/4πd)², which corresponds to the efficiency when the transmit and receive radiators are fully isotropic. (1) and (2) infer that the total efficiency decays two times faster as the wavelength (λ) is reduced and the separation (d) increased, compared to the increase in the gains of the transmitting (Gt) and receiving (Gr) antennas. Viewed in this way, the efficiency of far-field wireless power transfer solutions will always decrease as the separation between the transmitting and receiving antennas increases, regardless of the antenna gains. The main question, however, is what level of efficiency consumers will tolerate. The answer lies with consumer needs. As an example, it is noteworthy that far-field wireless transfer technologies have already found their way to the market – examples include Energous [12], operating at 900 MHz and using 24 antennas, who claim that their technology operates both in the near field and the far field. Another example is Ossia Cota [13,14]; however, it appears that their technology, despite appearing to be so, is not based on far-field wireless power transfer – rather, it uses a complex near field approach utilizing hundreds of antennas operating at frequencies of 2.4 GHz and 5.8 GHz [15]. However, no information on achievable efficiencies has been reported.
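The gain-versus-distance trade-off can be sketched numerically (all link values below are hypothetical illustration values, not taken from any of the systems mentioned):

```python
import math

def far_field_efficiency(G_t_dBi, G_r_dBi, f_hz, d_m):
    """Total link efficiency eta = G_t * G_r * (lambda / (4*pi*d))**2."""
    lam = 3e8 / f_hz                 # free-space wavelength
    G_t = 10 ** (G_t_dBi / 10)       # gains converted from dBi to linear
    G_r = 10 ** (G_r_dBi / 10)
    return G_t * G_r * (lam / (4 * math.pi * d_m)) ** 2

# Hypothetical 5.8 GHz link with 20 dBi antennas at both ends: doubling the
# separation quarters the efficiency, whatever the antenna gains
for d in (1.0, 2.0, 4.0):
    print(f"d = {d:.0f} m: eta = {far_field_efficiency(20, 20, 5.8e9, d) * 100:.2f} %")
```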

Conclusion

Wireless power transfer technologies have gained considerable interest from both academia and industry in recent years. While it is obvious that no wireless transfer technology can match the efficiency of its wired counterparts, it is still not clear what the ultimate wireless power transfer technology will look like – will it be near field? Far field? A combination of both? Or something different? At this point it is difficult to predict with certainty, making this a very interesting time for both research and the wireless power transfer industry.

References:

[1] H. Hertz, Electric Waves: Being Researches on the Propagation of Electric Action With Finite Velocity Through Space. New York, NY, USA: Dover, 1962.

[2] https://patents.google.com/patent/US649621A/en.

[3] https://patents.google.com/patent/US645576A/en .

[4] R. Bhutkar and S. Sapre, “Wireless Energy Transfer using Magnetic Resonance”, in 2009 Second International Conference on Computer and Electrical Engineering, https://doi.org/10.1109/ICCEE.2009.194, Dubai, United Arab Emirates.

[5] https://en.wikipedia.org/wiki/Wireless_Power_Consortium.

[6] https://en.wikipedia.org/wiki/Qi_(standard).

[7] https://airfuel.org/.

[8] https://airfuel.org/airfuel-resonant/.

[9]https://airfuel.org/frequency-.choice/#:~:text=While%20regulatory%20design%20challenges%20may,such%20as%20wireless%20power%20systems).

[10] N. Ha-Van, C.R. Simovski, F.S. Cuesta, P. Jayathurathnage, and S.A. Tretyakov, Phys. Rev. Applied 20, 014044 – Published 20 July 2023.

[11] https://web.archive.org/web/20051023080942/http://www2.univ-reunion.fr/~lcks/Old_Version/PubIAF97.htm.

[12] https://energous.com/.

[13] https://www.ossia.com/cota.

[14] https://f.hubspotusercontent30.net/hubfs/2870932/Content%20Offers/Whitepapers/Cota%20vs%20Other%20Wireless%20Power%20Technologies.pdf.

[15] https://www.electronicdesign.com/technologies/power/whitepaper/21178200/electronic-design-rethinking-wireless-power-a-closer-look-at-ossias-technology.

Analogue phase detectors https://drbulja.com/analogue-phase-detectors-1183/ Wed, 09 Aug 2023 02:55:58 +0000

Analogue phase detectors are essential circuits in RF and mm-wave applications whenever the phase difference between two signals needs to be found. There are many communication scenarios in which one needs to know the phase difference between two signals – power combining being one example.

Analogue phase detectors are relatively simple microwave circuits; however, how exactly they work is not widely known. In this article, the basic operating principles of phase detectors are presented.

Nonlinear circuits – basic principles

In order to understand how phase detectors work, one needs to know the basic principles of nonlinear circuits, such as diodes and transistors, Fig. 1. In any nonlinear circuit, the signal at the output is not simply linearly proportional to the signal at the input; it also contains higher order contributions, as shown in (1).

Fig. 1 Generic nonlinear device

I_out = a_0 + a_1·I_in + a_2·I_in² + a_3·I_in³ + ⋯   (1)

(1) is also known as the Taylor series expansion. In (1), 𝑎𝑖 are the coefficients which are usually experimentally determined. To best describe the output signal, the number of polynomial terms should, in theory, be infinite, but, in practice, only a few terms are used. The number of polynomial terms used in practice depends on the level of nonlinearities exhibited by the device (such as a diode or transistor) and the power of the input signal.

As an illustration, let us assume that the input signal is a simple sinewave given by 𝐼𝑖𝑛 = 𝐼1𝑐𝑜𝑠(𝜔𝑡 + 𝜙1), and that the number of terms in (1) is limited to 3. The output signal 𝐼𝑜𝑢𝑡 becomes:

I_out = a_0 + a_2·I_1²/2 + (a_1·I_1 + 3·a_3·I_1³/4)·cos(ωt + φ_1) + (a_2·I_1²/2)·cos(2ωt + 2φ_1) + (a_3·I_1³/4)·cos(3ωt + 3φ_1)   (2)

In other words, the output signal contains not only the frequency of the input signal, but its harmonics too, namely the products of second and third order mixing. The order of harmonic mixing determines the highest frequency of the response, which in this case is 3ω. This basic rule is used in the design of mixers and, as we will see later, in the design of analogue phase detectors too.
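The harmonic build-up described above can be verified numerically: applying a third-order polynomial (hypothetical coefficients) to a pure tone and inspecting its spectrum shows energy at DC, ω, 2ω and 3ω only:

```python
import numpy as np

# A pure tone passed through a third-order polynomial nonlinearity; the
# coefficients a1..a3 are hypothetical illustration values
a1, a2, a3 = 1.0, 0.5, 0.2
fs, f0, n = 1000, 10, 1000                  # sample rate (Hz), tone (Hz), samples
t = np.arange(n) / fs
i_in = np.cos(2 * np.pi * f0 * t)
i_out = a1 * i_in + a2 * i_in**2 + a3 * i_in**3

# One-sided spectrum; bin spacing is fs/n = 1 Hz, so bin index equals frequency
spectrum = np.abs(np.fft.rfft(i_out)) / n
for f in (0, f0, 2 * f0, 3 * f0):
    print(f"{f:2d} Hz: {spectrum[f]:.3f}")
```

The DC bin reads a2/2, and spectral lines appear at f0, 2·f0 and 3·f0 and nowhere else, confirming that 3ω is the highest frequency in the response.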

Mixers

Standard mixers are nonlinear devices and usually have three ports – the Local Oscillator (LO) port, the RF port and the Intermediate Frequency (IF) port, as shown in Fig. 2. The signals emanating from the IF port are the product of multiplication of the LO and RF signals.

Fig. 2 Standard mixer

If we assume that the RF signal is given by I_RF = I_RF_MAG·cos(ωt + φ_RF) and the LO signal by I_LO = I_LO_MAG·cos(ωt + φ_LO), the multiplication function of the mixer of Fig. 2 produces the following output:

I_IF_I = a_2·I_RF_MAG·I_LO_MAG·cos(ωt + φ_RF)·cos(ωt + φ_LO) = (a_2·I_RF_MAG·I_LO_MAG/2)·[cos(φ_RF − φ_LO) + cos(2ωt + φ_RF + φ_LO)]   (3)

where a_2 is the conversion coefficient, usually provided in the datasheet of a mixer and, as mentioned earlier, experimentally determined. Index I in (3) refers to the fact that both the RF and LO signals are cosine functions, or “in-line”. The composite signal given by (3) contains the products of second order mixing, which in the present case are DC and the second harmonic. The first term in (3), the DC term, is of particular use in the design of phase detectors, as will be explained later.

Phase detectors using mixers

The DC term in (3) is proportional to the phase difference between the LO and RF signals and can, in theory, be used to construct a phase detector. However, the main issue lies with the fact that the extracted phase would then depend on the correct extraction of the “amplitude” of the DC “signal”, i.e., a_2·(I_RF_MAG·I_LO_MAG)/2. Theoretically, this could be accounted for through a careful calibration of the mixer, but that would make a phase detector constructed in this way highly dependent on the power levels applied to the RF and LO ports, increasing its sensitivity to them. It should be noted that the second harmonic in (3) can be easily eliminated using a low-pass filter; in many instances, a grounded capacitor suffices. By eliminating the second harmonic, the DC IF output of (3) becomes:

I_IF_I_DC = (a_2·I_RF_MAG·I_LO_MAG/2)·cos(φ_RF − φ_LO)   (4)

To eliminate phase difference dependence on LO and RF power levels, one more piece of information is required. For this purpose, let us now assume that the LO signal is phase shifted by 𝜋/2. The LO signal now becomes:

I_LO_Q = I_LO_MAG·cos(ωt + φ_LO − π/2) = I_LO_MAG·sin(ωt + φ_LO)   (5)

Index Q in (5) refers to the fact that the RF and LO signals are “in quadrature”, i.e., the LO signal is a sine function, while the RF signal is a cosine function. Mixing such an LO signal with an RF signal given by 𝐼𝑅𝐹 = 𝐼𝑅𝐹_𝑀𝐴𝐺 𝑐𝑜𝑠(𝜔𝑡 + 𝜙𝑅𝐹 ) produces the following IF output:

I_IF_Q = (a_2·I_RF_MAG·I_LO_MAG/2)·[sin(2ωt + φ_RF + φ_LO) − sin(φ_RF − φ_LO)]   (6)

The DC part of (6) is now equal to:

I_IF_Q_DC = −(a_2·I_RF_MAG·I_LO_MAG/2)·sin(φ_RF − φ_LO)   (7)

The ratio of (7) over (4) now gives:

I_IF_Q_DC / I_IF_I_DC = −tan(φ_RF − φ_LO)   (8)

From which the phase difference can be extracted in a simple manner by:

φ_RF − φ_LO = −arctan(I_IF_Q_DC / I_IF_I_DC)   (9)

As is obvious from (9), the measured phase difference is no longer a function of the input powers of the LO and RF signals. The circuit that performs this function is given in Fig. 3.

Fig. 3 Phase detector using mixers (low-pass filters at the I and Q outputs, excluded for brevity).
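A minimal numerical sketch of the two-mixer I/Q phase recovery described above (test phases and amplitudes are arbitrary; numpy's `arctan2` is used instead of a plain arctangent so that the quadrant is resolved):

```python
import numpy as np

# Simulate the two-mixer I/Q phase detector: mix the RF signal with the LO and
# with a 90-degree-shifted LO, low-pass filter by averaging, then recover the
# phase. Amplitudes are deliberately unequal: the result does not depend on them.
fs, f0, n = 1e6, 10e3, 100000
t = np.arange(n) / fs
phi_rf, phi_lo = np.deg2rad(73.0), np.deg2rad(20.0)   # arbitrary test phases
i_rf = 0.3 * np.cos(2 * np.pi * f0 * t + phi_rf)
i_lo = 1.0 * np.cos(2 * np.pi * f0 * t + phi_lo)
i_lo_q = 1.0 * np.sin(2 * np.pi * f0 * t + phi_lo)

dc_i = np.mean(i_rf * i_lo)     # proportional to  cos(phi_rf - phi_lo)
dc_q = np.mean(i_rf * i_lo_q)   # proportional to -sin(phi_rf - phi_lo)
phase_deg = np.rad2deg(np.arctan2(-dc_q, dc_i))
print(f"recovered phase difference: {phase_deg:.2f} deg")   # expected: 53.00 deg
```

Changing the 0.3 and 1.0 amplitudes leaves the recovered 53 degrees unchanged, which is exactly the amplitude independence the ratio construction provides.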

As such, in theory, a phase detector can be constructed using two mixers and passive RF circuitry. In practice, care must be taken that appropriate power levels are applied to the LO and RF ports; usually, the power applied to the RF port needs to be at least 10 dB lower than that applied to the LO port, so that the mixers do not exceed their maximum power ratings [1]. In addition, the IF ports need not be terminated in 50 Ω; the IF outputs in the present configuration behave as current sources, and higher termination resistances are usually recommended, but this is design dependent.

Conclusions

In this short article, basic principles of operation of nonlinear circuits are presented, together with their use in the design of mixers and phase detectors. Both mixers and phase detectors are important RF/mm-wave devices used in a variety of telecommunications systems.

References:

[1] https://www.minicircuits.com/pages/pdfs/mixer1-2.pdf

[2] https://www.minicircuits.com/app/AN00-011.pdf

Beamforming https://drbulja.com/beamforming-1106/ Fri, 21 Jul 2023 02:40:09 +0000

Continuously growing demands for higher speeds and higher throughputs place a strain on existing technologies and, at the same time, call for new technical solutions offering higher capacity and efficiency in wireless communications. High throughputs are achievable using three sets of means:

  1. Bandwidth increase.
  2. Increased spectral efficiency.
  3. Spatial reuse, antenna arrays, MIMO.

In standardized wireless communications systems, bandwidths cannot be increased arbitrarily, as they are strictly controlled by standardization bodies such as 3GPP [1]. As such, option 1 is of limited immediate influence and needs to be viewed both holistically and in the context of the relevant standards.

Option 2 appears interesting and has been used to increase data rates through the introduction of high order modulation schemes. The increase in the data rates that can be “compressed” using high order modulation schemes is, unfortunately, accompanied by an increase in sensitivity, resulting in data loss and a reduced physical range at which high data rates are achievable. Ultimately, the increase in spectral efficiency has to be supported by high-performance Radio Frequency (RF) components.

Option 3 has gained widespread interest in recent years as a reliable means to increase data throughput and is the main topic of the present article. By abandoning the single antenna (omnidirectional) concept and introducing antenna arrays [2], the concept of beamforming is born.

Fig. 1 Radiation characteristics (a) of a single patch antenna (c) and radiation characteristics (b) of a 32×32 antenna array (d)

In simple terms, beamforming relies on the use of several antennas, fed with signals of appropriate amplitudes and phases, such that the composite signal at a particular angle exhibits constructive interference, while at all other angles it experiences destructive interference. Even though the concept of antenna arrays is old [3], it has only recently been employed in commercial telecommunications, such as 5G. Fig. 1 (a) shows the radiation characteristics of a single patch antenna (Fig. 1 (c)) operating at 11.7 GHz with a gain of 6.4 dBi. The angular width, or 3 dB beamwidth, of this antenna is about 90°, which is standard for this antenna type. The radiation characteristics of an antenna array consisting of 32×32 (1024) antenna elements identical to those in Fig. 1 (c), in the direction perpendicular to the array, are shown in Fig. 1 (b). As one can see, the gain of the array is around 34.4 dBi, with an angular bandwidth of only 3.6°. This is a tremendous increase in the power being radiated towards a desired user compared to the case of the single antenna. On a linear scale, this corresponds to a power increase of over 600 times! Also of note is the existence of many transmission zeroes in the response of the array, which is a direct consequence of destructive interference. This is of particular use in blocking communications to undesired users. Fig. 1 (b) shows the radiation characteristics for the case when both the azimuthal and elevation angles are equal to zero. This case occurs when the signals feeding each antenna element are of the same phase. However, antenna arrays can do much more – they can steer the radiated beam, usually between -45° and 45°, in both the azimuthal and elevation planes, although this is application dependent. Beam steering is accomplished by the addition of a phase shifter behind each antenna element, as shown in Fig. 2.

Fig. 2 Antenna array with phase shifters.
Fig. 3 Radiation characteristics of the antenna beam of the antenna array of Fig. 1 (d) steered to (a) -15° and (b) -45°
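The figures quoted above can be cross-checked numerically: a lossless array would add 10·log10 of the element count to the element gain, and the quoted 28 dB gain difference corresponds to the "over 600 times" power increase:

```python
import math

# Cross-check of the quoted numbers: element and array gains taken from the text
element_gain_dbi = 6.4      # single patch
array_gain_dbi = 34.4       # reported 32x32 array gain
ideal_gain_dbi = element_gain_dbi + 10 * math.log10(32 * 32)   # lossless upper bound

# Linear power ratio between the array and a single element
ratio = 10 ** ((array_gain_dbi - element_gain_dbi) / 10)
print(f"ideal array gain: {ideal_gain_dbi:.1f} dBi")        # 36.5 dBi
print(f"power ratio vs single element: {ratio:.0f}x")       # 631x, 'over 600 times'
```

The reported 34.4 dBi sits a couple of dB below the 36.5 dBi ideal bound, consistent with feed-network and element losses.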

Examples of the beam being steered to -15° and -45° in the azimuthal plane are shown in Fig. 3. Antenna arrays appear to solve many important communication problems and their advantages can be summarized as:

  1. Gain increase and, hence, throughput.
  2. Relatively simple construction – all that is required is to bring antenna elements into close proximity to each other (to avoid grating lobes, the antenna spacing should be around half a wavelength at the frequency of operation).

However, these antennas are not without drawbacks. Below, some are listed:

  1. In the case of very large antenna arrays, a great deal of power is lost in the distribution network, which usually limits the size of the array.
  2. The requirement for a phase shifter behind each antenna element for beam steering leads to high implementation costs, as adequately performing phase shifters are expensive.
  3. Due to their principle of operation based on interference, beam steering beyond -60° and 60° is usually very challenging, due to the appearance of grating lobes.

In order to overcome these drawbacks of antenna arrays, a variety of watered-down solutions have been developed, such as switched beam antennas (the Butler matrix), which consist of switches and fixed-value phase shifters. Such a solution, even though appropriate for many applications, suffers from the same problems as antenna arrays, such as performance degradation due to grating lobes.

An interesting, but often overlooked, solution for beamforming is the Luneburg lens [4]. The traditional Luneburg lens is shown in Fig. 4 and is of a spherical shape. The lens is a natural beamformer, as it converts a point EM source into a collimated (plane) wave in the diametrically opposite direction, provided that the source is placed on its edge, as shown in Fig. 4 (a). The correct operation of the Luneburg lens is achievable provided that the gradient of the dielectric permittivity follows the equation given in Fig. 4 (b). In other words, an ideal Luneburg lens has a dielectric permittivity of 2 at its centre and 1 at its edges. This condition can very easily be met in practice using a variety of fabrication techniques, such as 3D printing [5]; however, other techniques could also be employed.

Fig. 4 Luneburg lens; Radiation direction as a function of the position of the point source (a) and dielectric permittivity gradient (b)

Beam steering with a Luneburg lens can be achieved using a switching matrix connected to several points on the sphere of the lens. Through the selection of appropriate excitation point sources using the switching matrix, a beam can be created in a desired direction. The gain provided by the Luneburg lens is fully dependent on the size of the lens: usually, the bigger the lens, the greater the gain. As an example of the achievable antenna gains, a lens antenna with a diameter of 6λ0 provides a gain of almost 16 dBi [5]. An interesting feature of the Luneburg lens is its low level of sidelobes compared to standard antenna arrays, and scanning angles from, theoretically, -90° to 90°. In practice, this is usually reduced, due to the ground effect, to -75° to 75°, but, in general, the lens can be designed for higher scanning angles.
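The ideal permittivity profile described above can be sketched as follows (the lens radius here is a hypothetical value):

```python
import math

def luneburg_permittivity(r, R):
    """Ideal Luneburg profile eps_r(r) = 2 - (r/R)**2, valid for 0 <= r <= R."""
    if not 0 <= r <= R:
        raise ValueError("r must lie inside the lens")
    return 2.0 - (r / R) ** 2

R = 0.05  # hypothetical 5 cm lens radius
for frac in (0.0, 0.5, 1.0):
    eps = luneburg_permittivity(frac * R, R)
    print(f"r = {frac:.1f}R: eps_r = {eps:.2f}, refractive index n = {math.sqrt(eps):.3f}")
```

The profile evaluates to 2 at the centre and 1 at the edge, matching the condition stated in the text.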

Compared to antenna arrays, the main advantages of the Luneburg lens antenna are:

  1. Very simple construction.
  2. Higher power handling capability, due to the use of switches rather than varactors in phase shifters.
  3. Wide scanning angles and a low level of sidelobes.

However, they also exhibit some drawbacks, such as:

  1. Luneburg lens antennas are volumetric antennas, as opposed to antenna arrays, which are of a very low profile. This is usually not a problem at high frequencies but is quite challenging at lower frequencies.
  2. Integration of the Luneburg lens antenna with Transceiver (TRX) circuitry is more challenging compared to antenna arrays.

Despite these disadvantages, the Luneburg lens antenna is gaining popularity, particularly in the context of mm-wave 5G communications.

In conclusion, beamforming is an attractive way to increase data throughput in communications networks and it can be implemented in several ways. However, choosing the right beamformer is ultimately dependent on the application and great care needs to be exercised prior to realisation.

References:

[1] https://www.3gpp.org/.

[2] S. Yamaguchi, H. Nakamizo, S. Shinjo, K. Tsutsumi, T. Fukasawa and H. Miyashita, “Development of active phased array antenna for high SHF wideband massive MIMO in 5G,” 2017 IEEE International Symposium on Antennas and Propagation & USNC/URSI National Radio Science Meeting, 2017, pp. 1463-1464.

[3] https://en.wikipedia.org/wiki/History_of_smart_antennas.

[4] https://en.wikipedia.org/wiki/Luneburg_lens.

[5] S. Bulja et al., “Millimeter-wave 3D printed Luneburg Lens Antenna”, in IEEE RADIO, Reunion, France, September 2019.

Rectennas – Limitations and Influences (Part II) https://drbulja.com/rectennas-limitations-and-influences-part-ii-903/ Tue, 13 Jun 2023 01:35:50 +0000

With reference to Fig. 2 (a) in Part I, each parameter influencing the efficiency of a rectifier operating at 10 GHz is examined. As a reminder, its efficiency is examined as a function of: a) the influence of threshold and breakdown voltages; b) the influence of loss; c) the influence of the diode's parasitic resistance; d) the influence of the diode's junction capacitance; e) the influence of termination type; and f) the influence of package parasitics.

A. Influence of threshold and breakdown voltages on efficiency

Let us now assume that all components in the circuit of Fig. 2 (a) of Part I are ideal, apart from the threshold and breakdown voltages, which assume finite values. Table I below lists the assumed values, the efficiencies recorded and the input power level at which each efficiency was recorded. As can be seen, maximum efficiency is obtained when the breakdown voltage is high and the threshold voltage is low. This is intuitively understandable and points to one way of increasing the efficiency of rectifying circuits.

Table I Influence of threshold and breakdown voltage

B. Influence of loss

In a similar way to A, the influence of loss is also examined. For this purpose, four cases are identified, as given in Table II. As can be seen, losses negatively affect the overall efficiency, with the efficiency dropping from 81% for 0.4 dB of insertion loss to 61% for 1.6 dB. The marginal increase in the maximum power is explained by the fact that losses reduce the maximum power reaching the active device (the diode in this case).

Table II Influence of losses

C. Influence of diode’s parasitic resistance

As with the previous two cases, Table III shows the effect of an increase in parasitic resistance. As can be seen, an increase in the diode's parasitic resistance reduces the overall efficiency of the rectifier; however, the reduction in efficiency is not as dramatic as it was for case B, investigated previously.

Table III Influence of diode’s parasitic resistance

D. Influence of diode’s junction capacitance

Here, the influence of the diode's junction capacitance is investigated for frequencies up to 10 GHz. For the purpose of this exercise, it was assumed that the diode's series resistance is Rs = 10 Ω and its storage time TT = 10 ps. The diode's zero-voltage junction capacitance is assumed to be Cj0 = 0.2 pF. Table IV shows the effect of the finite capacitance on the overall efficiency as a function of frequency. As can be seen, the diode's capacitance negatively affects both the efficiency and the maximum power, and it also limits the rectifier's dynamic range.

Table IV Influence of diode’s junction capacitance
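One back-of-envelope way to see why the junction capacitance hurts at high frequency is to evaluate the shunt reactance it presents, using the Cj0 = 0.2 pF value above (a sketch, not the full nonlinear diode model):

```python
import math

def junction_reactance(f, C):
    """Shunt reactance magnitude 1 / (2*pi*f*C) presented by the junction capacitance."""
    return 1.0 / (2.0 * math.pi * f * C)

# With Cj0 = 0.2 pF (from the text), the capacitive reactance falls to around
# 80 ohm at 10 GHz - comparable to the circuit impedances, so an increasing
# share of the RF current bypasses the rectifying junction
Cj0 = 0.2e-12
for f in (1e9, 5e9, 10e9):
    print(f"{f / 1e9:4.0f} GHz: |X_C| = {junction_reactance(f, Cj0):6.1f} ohm")
```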

E. Influence of termination type

The implementation of a low-pass filter in the circuit of a rectifier strongly influences the overall efficiency. Usually, a simple capacitor is used to block higher order harmonics (the 2nd and 3rd being of highest significance); however, in certain applications a class-F termination can also be used, which effectively provides a short circuit at the harmonic frequencies, Fig. 1. In Table V below, we provide a comparison between an ideal class-F circuit providing a short termination at the second and third harmonics and a standard capacitor termination.

Fig. 1 (a) Rectifier with a capacitive termination
Fig. 1 (b) Rectifier with a class-F termination
Table V Influence of termination type

As can be seen from Table V, class-F terminations have the potential to increase the overall efficiency, albeit marginally. In this case, however, the exact efficiency values will be determined by the losses of the capacitive and class-F termination implementations.

F. Influence of package parasitics

Here, the influence of package parasitics on the designed single diode rectifier was examined by including them in the diode model. An ideal capacitive termination of C = 100 pF was used in both cases. The diode case was assumed to be an SOT-23 package [1]. Table VI shows the obtained results. As can be seen, the package does not influence the overall efficiency; however, it does influence the maximum power at which the highest efficiency is observed.

Table VI Influence of package parasitics

In conclusion, the choice of the active device for implementation in the circuit of a rectifier is very important, as it has a significant impact on the overall efficiency, dynamic range and maximum input power. As such, the correct diode needs to be chosen with great care.

References:

[1] https://en.wikipedia.org/wiki/Small-outline_transistor

Rectennas – Limitations and Influences (Part I) https://drbulja.com/rectennas-limitations-and-influences-part-i-891/ Mon, 12 Jun 2023 09:05:29 +0000

Functionally, a rectenna is a circuit comprising an antenna and a rectifying circuit that can be used to convert Electro-Magnetic (EM) energy into DC. The origins of the rectenna can be traced all the way back to [1], where a rectenna was used to power a helicopter model [2]. Interest in rectennas has soared in recent years, driven by the rise of new ways of energy harvesting. Of note here is the use of Space Solar Power Satellites (SSPS), which collect and convert solar energy into electrical energy that is beamed down to a ground station on Earth [3]. Here, the efficiency of conversion from EM to DC is of high importance, as inefficiencies result in wasted power and increased heat. The principle of EM to DC conversion is very simple, as presented in Fig. 1.

Fig. 1 Simplified block diagram of EM to dc converter

The crux of correct operation lies with the nonlinear device. The nonlinear device acts as a multiplier, creating signals at the output whose frequencies are integer multiples of the frequency of the input signal. If we assume that the frequency of the input signal is f1, the nonlinear device creates signals at frequencies 2f1, 3f1, … and, more generally, n·f1 ± m·f1, with n and m being integers. As such, the output contains, in addition to the DC component (f1 − f1), signals at frequencies f1, 2f1, 3f1, and so on. The high-frequency components are eliminated by a Low-Pass Filter (LPF), whose simplest implementation is a capacitor. Simplified, standard rectifying circuits employing 1, 2 and 4 diodes are shown in Fig. 2. In this figure, the chosen nonlinear device is a diode. The most commonly used diode is a Schottky diode, or any other low- or zero-barrier diode, in order to increase the efficiency of rectification. The choice between the single-, double- or 4-diode rectifier depends on the maximum input power that the device is expected to handle. Usually, a single-diode circuit is used for low-power applications, while high-power applications usually require rectifiers with a greater number of diodes. However, this has to be carefully evaluated, as a single diode with a higher breakdown voltage may prove to provide greater efficiency than a rectifier built from a greater number of low-breakdown-voltage diodes.
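The mixing-product picture above can be sketched numerically: applying an ideal half-wave rectifier (a crude stand-in for the diode, not a model of any circuit in Fig. 2) to a single tone and inspecting the spectrum reveals the DC term (f1 − f1) alongside f1 and its harmonics. All frequencies and amplitudes here are illustrative:

```python
import numpy as np

# Illustrative sketch: a diode-like nonlinearity applied to a single
# tone generates a DC term plus components at multiples of f1.
fs = 1000.0          # sample rate, Hz (arbitrary for illustration)
f1 = 10.0            # input tone frequency, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)          # exactly 1 s record -> 1 Hz bins
v_in = np.cos(2.0 * np.pi * f1 * t)

# Ideal half-wave rectifier as the nonlinear element
v_out = np.maximum(v_in, 0.0)

spectrum = np.abs(np.fft.rfft(v_out)) / len(t)   # one-sided magnitudes
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)

# The strongest components sit at DC (f1 - f1), f1 and 2*f1
for f in (0.0, f1, 2.0 * f1):
    idx = int(round(f))                          # 1 Hz bin resolution
    print(f"{freqs[idx]:5.1f} Hz -> {spectrum[idx]:.3f}")
```

A low-pass filter then keeps only the 0 Hz bin, which is exactly the DC output the rectenna exists to produce.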

Fig. 2. Simplified rectifying circuits; (a) single diode
Fig. 2. Simplified rectifying circuits; (b) 2-diode
Fig. 2. Simplified rectifying circuits; (c) 4-diode

With reference to the single-diode rectifier circuit of Fig. 2 (a), we will examine the influence of each diode parameter on the efficiency of the rectifier. For this purpose, we examine the efficiency of a rectifier operating at a frequency of 10 GHz as a function of: a) threshold and breakdown voltages; b) loss; c) the diode's parasitic resistance; d) the diode's junction capacitance; e) the termination type; and f) package parasitics. This will be done in Part II.

References:

[1] W. C. Brown et al., "Microwave to DC Converter," U.S. Patent 3,434,678, filed 5 May 1965, granted 25 March 1969.

[2] https://en.wikipedia.org/wiki/Rectenna

[3] N. C. Au, D. M. Nguyen, T. D. Nhu and C. Seo, "A 5.8 GHz rectifier using diode connected MESFET for space solar satellite system," IEEE Transactions on Microwave Theory and Techniques, vol. 70, no. 10, October 2022.

