  • 1
    UID:
    (DE-627)181817930X
    ISSN: 2169-3536
    Content: Non-coherent transmission from multiple transmission-reception points (TRPs), i.e., base stations or base-station panels, to a user equipment (UE) is exploited in 5G New Radio (NR) to improve downlink reliability and cell-edge throughput. Ultra-reliable low-latency communications (URLLC) and enhanced Mobile BroadBand (eMBB) are prominent target use cases for multi-TRP or multi-panel transmissions. In the Third-Generation Partnership Project (3GPP) Release 17 specifications, multi-TRP-based transmissions were specified for the physical downlink control channel (PDCCH) specifically to enhance its reliability and robustness. This work provides a comprehensive account of multi-TRP reliability-enhancement schemes applicable to the 5G NR PDCCH, including those supported by the 3GPP Release 17 specifications. The specification impact of each scheme, the UE and network complexity, and their utility in various use cases are studied. Their error performance is evaluated via link-level simulations using the evaluation criteria agreed in the 3GPP proceedings. The 3GPP-supported multi-TRP PDCCH repetition schemes, along with the additionally proposed PDCCH repetition and diversity schemes, are shown to be effective in improving 5G NR PDCCH reliability and combating link blockage in mmWave scenarios. The link-level simulations also provide insights for implementing the decoding schemes for the PDCCH enhancements under different channel conditions. Analysis of the performance, complexity, and implementation constraints of the proposed PDCCH transmission schemes indicates their suitability for UEs with reduced capability or stricter memory constraints and for flexible network scheduling.
    In: Institute of Electrical and Electronics Engineers, IEEE access, New York, NY : IEEE, 2013, 10(2022), pages 97394-97407, 2169-3536
    In: volume:10
    In: year:2022
    In: pages:97394-97407
    Language: English
    URL: Full text (free of charge)
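The repetition idea behind the PDCCH schemes in the abstract above can be illustrated with a toy simulation; this is a sketch of generic soft (LLR) combining of two independently received copies over AWGN, not the 3GPP Release 17 scheme itself, and the SNR value is an arbitrary assumption.

```python
import numpy as np

# Toy illustration (not the 3GPP scheme): repeating a payload over two
# independent links (e.g., one per TRP) and soft-combining the received
# copies lowers the error rate versus decoding a single copy.
rng = np.random.default_rng(0)
n_bits = 100_000
snr_db = 0.0                       # assumed per-link SNR, hypothetical value
sigma = 10 ** (-snr_db / 20)

bits = rng.integers(0, 2, n_bits)
symbols = 1 - 2 * bits             # BPSK mapping: 0 -> +1, 1 -> -1

# Two independent received copies of the same symbols.
rx1 = symbols + sigma * rng.standard_normal(n_bits)
rx2 = symbols + sigma * rng.standard_normal(n_bits)

# For BPSK over AWGN the LLR is proportional to the received sample,
# so soft combining is simply adding the two observations.
single_errors = np.mean((rx1 < 0) != bits.astype(bool))
combined_errors = np.mean(((rx1 + rx2) < 0) != bits.astype(bool))

print(f"single-link BER:   {single_errors:.4f}")
print(f"soft-combined BER: {combined_errors:.4f}")
```

The combined bit-error rate comes out clearly below the single-link rate, which is the basic diversity gain that the repetition schemes exploit.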
  • 2
    UID:
    (DE-627)1770722114
    ISSN: 2169-3536
    Content: Abstract: Describing ecosystem carbon fluxes is essential for deepening the understanding of the Earth system. However, partitioning net ecosystem exchange (NEE), i.e., the sum of ecosystem respiration (R_eco) and gross primary production (GPP), into these summands is ill-posed, since there can be infinitely many mathematically valid solutions. We propose a novel data-driven approach to NEE partitioning using a deep state space model, which combines the interpretability and uncertainty analysis of state space models with the ability of recurrent neural networks to learn the complex functions governing the data. We validate our proposed approach on the FLUXNET dataset. We suggest using both the past and the future of R_eco's predictors for training, along with the nighttime NEE (NEE_night), to learn a dynamical model of R_eco. We evaluate our nighttime R_eco forecasts by comparing them to the ground-truth NEE_night and obtain the best accuracy with respect to other partitioning methods. The learned nighttime R_eco model is then used to forecast the daytime R_eco, conditioning on future observations of different predictors, i.e., global radiation, air temperature, precipitation, vapor pressure deficit, and daytime NEE (NEE_day). Subtracted from NEE_day, these estimates yield the GPP, finalizing the partitioning. Our purely data-driven daytime R_eco forecasts are in line with recent empirical partitioning studies reporting lower daytime R_eco than the Reichstein method, which can be attributed to the Kok effect, i.e., plant respiration being higher at night. We conclude that our approach is a good alternative for data-driven NEE partitioning and complements other partitioning methods.
    In: Institute of Electrical and Electronics Engineers, IEEE access, New York, NY : IEEE, 2013, 9(2021), pages 107873-107883, 2169-3536
    In: volume:9
    In: year:2021
    In: pages:107873-107883
    In: extent:10
    Language: English
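The final partitioning step of the abstract above reduces to simple arithmetic under the stated sign convention (NEE = R_eco + GPP): once the model supplies daytime respiration forecasts, GPP is what remains after subtracting them from the measured daytime NEE. The flux values below are invented placeholders, not FLUXNET data.

```python
import numpy as np

# Sketch of the partitioning arithmetic only; the hard part in the paper
# is producing reco_day with the deep state space model. Values are
# hypothetical, in umol CO2 m-2 s-1.
nee_day = np.array([-12.0, -8.5, -15.2, -6.3])   # measured daytime NEE (hypothetical)
reco_day = np.array([3.1, 2.8, 3.5, 2.6])        # modeled daytime respiration (hypothetical)

# NEE = R_eco + GPP  =>  GPP = NEE - R_eco  (sign convention from the abstract)
gpp = nee_day - reco_day
print(gpp)   # negative values indicate net carbon uptake by photosynthesis
```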
  • 3
    UID:
    (DE-627)1853485993
    Format: Illustrations
    ISSN: 2169-3536
    In: Institute of Electrical and Electronics Engineers, IEEE access, New York, NY : IEEE, 2013, 9(2021), pages 80603-80620, 2169-3536
    In: volume:9
    In: year:2021
    In: pages:80603-80620
    Language: English
  • 4
    UID:
    (DE-627)184337661X
    ISSN: 2169-3536
    Content: AI-generated images have gained in popularity in recent years due to improvements and developments in the field of artificial intelligence. This has led to several new AI generators, which may produce realistic, funny, and impressive images from a simple text prompt. DALL-E-2, Midjourney, and Craiyon are a few examples of such approaches. In general, the quality, realism, and appeal of the images vary depending on the approach used. Therefore, in this paper, we analyze to what extent such AI-generated images are realistic or of high appeal from a more photographic point of view, and how users perceive them. To evaluate the appeal of several state-of-the-art AI generators, we develop a dataset consisting of 27 different text prompts, some of them based on the DrawBench prompts. Using these prompts, we generated a total of 135 images with five different AI text-to-image generators. These images, in combination with real photos, form the basis of our evaluation. The evaluation is based on an online subjective study, and the results are compared with state-of-the-art image quality models and features. The results indicate that some of the included generators are able to produce realistic and highly appealing images; however, this depends to a large extent on the approach and the text prompt. The dataset and evaluation of this paper are made publicly available for reproducibility, following an Open Science approach.
    In: Institute of Electrical and Electronics Engineers, IEEE access, New York, NY : IEEE, 2013, 11(2023), pages 38999-39012, 2169-3536
    In: volume:11
    In: year:2023
    In: pages:38999-39012
    Language: English
  • 5
    UID:
    (DE-627)1759056308
    ISSN: 2169-3536
    Content: Abstract: Identifying the direction of emotional influence in a dyadic dialogue is of increasing interest in the psychological sciences, with applications in psychotherapy, analysis of political interactions, or interpersonal conflict behavior. Facial expressions are widely described as automatic and thus hard to influence overtly. As such, they are a perfect measure for better understanding unintentional behavioral cues about socio-emotional cognitive processes. With this view, this study is concerned with the analysis of the direction of emotional influence in dyadic dialogues based on facial expressions only. We exploit computer vision capabilities along with causal inference theory for quantitative verification of hypotheses on the direction of emotional influence, i.e., cause-effect relationships, in dyadic dialogues. We address two main issues. First, in a dyadic dialogue, emotional influence occurs over transient time intervals and with intensity and direction that vary over time. To this end, we propose a relevant-interval selection approach that we apply prior to causal inference to identify those transient intervals where causal inference should be applied. Second, we propose to use fine-grained facial expressions that are present when strong distinct facial emotions are not visible. To specify the direction of influence, we apply the concept of Granger causality to the time series of facial expressions over the selected relevant intervals. We tested our approach on new, experimentally obtained data. Based on quantitative verification of hypotheses on the direction of emotional influence, we were able to show that the proposed approach is promising for revealing cause-effect patterns under various instructed interaction conditions. Keywords: Psychology, Feature extraction, Transient analysis, Computer vision, Visualization, Licenses, Facial muscles
    In: Institute of Electrical and Electronics Engineers, IEEE access, New York, NY : IEEE, 2013, 9(2021), pages 73780-73790, 2169-3536
    In: volume:9
    In: year:2021
    In: pages:73780-73790
    Language: English
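The Granger-causality idea used in the abstract above can be sketched in a few lines: series x "Granger-causes" series y if adding x's past to a lag regression of y significantly reduces the residual error. This is a generic minimal version with synthetic data standing in for the facial-expression time series, not the authors' pipeline, and lag count and coefficients are arbitrary choices.

```python
import numpy as np

# Minimal Granger-style F-test on synthetic data where x drives y,
# so the test should detect x -> y influence.
rng = np.random.default_rng(1)
n, lag = 2000, 2
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(lag, n):
    # y depends on its own past and on past x (assumed toy dynamics).
    y[t] = 0.4 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.standard_normal()

def lagged(series, lag, n, start):
    """Columns series[t-1], ..., series[t-lag] aligned with t = start..n-1."""
    return np.column_stack([series[start - k : n - k] for k in range(1, lag + 1)])

Y = y[lag:]
X_restricted = np.column_stack([np.ones(n - lag), lagged(y, lag, n, lag)])
X_full = np.column_stack([X_restricted, lagged(x, lag, n, lag)])

# Residual sum of squares for the model without and with x's past.
rss_r = np.sum((Y - X_restricted @ np.linalg.lstsq(X_restricted, Y, rcond=None)[0]) ** 2)
rss_f = np.sum((Y - X_full @ np.linalg.lstsq(X_full, Y, rcond=None)[0]) ** 2)

# Large F means x's past adds real explanatory power for y.
f_stat = ((rss_r - rss_f) / lag) / (rss_f / (len(Y) - X_full.shape[1]))
print(f"F-statistic: {f_stat:.1f}")
```

In the paper the same kind of test is applied not to synthetic series but to facial-expression intensities over the selected relevant intervals.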
  • 6
    UID:
    (DE-627)1782425705
    Format: Diagrams
    ISSN: 2169-3536
    Content: Due to the energy transition and the decentralization of electricity generation, distribution power systems are gaining attention as their importance increases and new operational challenges emerge. The integration of renewables and electric vehicles, for instance, leads to manifold changes in the system, e.g., participation in the provision of ancillary services. To address these challenges, artificial intelligence provides a variety of solutions based on the increase in sensor data and computational capability. This paper provides a systematic overview of some of the most recent studies applying artificial intelligence methods to distribution power system operation, published during the last 10 years. Based on that, a general guideline is developed to support the reader in finding a suitable AI technique for a specific operation task. To this end, four general metrics are proposed to give an orientation on the requirements of each application. Thus, a conclusion can be drawn presenting suitable algorithms for each operation task.
    Note: Additional corporate body: Technische Universität Hamburg; additional corporate body: Technische Universität Hamburg, Institut für Elektrische Energietechnik
    In: Institute of Electrical and Electronics Engineers, IEEE access, New York, NY : IEEE, 2013, 9(2021), 2 Nov., pages 150098-150119, 2169-3536
    In: volume:9
    In: year:2021
    In: day:2
    In: month:11
    In: pages:150098-150119
    Language: English
  • 7
    UID:
    (DE-627)1828142700
    Format: Illustrations, tables
    ISSN: 2169-3536
    In: Institute of Electrical and Electronics Engineers, IEEE access, New York, NY : IEEE, 2013, 10(2022), pages 22429-22440, 2169-3536
    In: volume:10
    In: year:2022
    In: pages:22429-22440
    Language: English
  • 8
    UID:
    (DE-627)1860286410
    ISSN: 2169-3536
    Content: In a non-professional environment, multi-camera recordings of theater performances or other stage shows are difficult to realize, because amateurs are usually untrained in camera work and in using a vision mixing desk that mixes multiple cameras. This can be remedied by a production process with high-resolution cameras in which image sections from long shots or medium-long shots are cropped manually or automatically in post-production. For this purpose, Gandhi et al. presented a single-camera system (referred to as the Gandhi Recording System in the paper) that obtains close-ups from a high-resolution recording from the central perspective. The system proposed in this paper, referred to as the “Proposed Recording System,” extends the method to four perspectives, based on a Reference Recording System derived from professional TV theater recordings from the Ohnsorg Theater. Rules for camera selection, image cropping, and montage are derived from the Reference Recording System. For this purpose, body- and pose-recognition software is used and the stage action is reconstructed from the recordings into the stage set. Speakers are recognized by detecting lip movements, and speaker changes are identified using audio diarization software. The Proposed Recording System is practically instantiated on a school theater recording made by laymen using four 4K cameras. An automatic editing script is generated that outputs a montage of a scene. The principles can also be adapted to other recording situations with an audience, such as lectures, interviews, discussions, talk shows, gala events, award ceremonies, and the like. In an online study, more than 70% of test persons confirmed the added value of the perspective diversity of the four cameras of the Proposed Recording System versus the single-camera method of Gandhi et al.
    In: Institute of Electrical and Electronics Engineers, IEEE access, New York, NY : IEEE, 2013, 11(2023), 1 Sept., pages 96673-96692, 2169-3536
    In: volume:11
    In: year:2023
    In: day:1
    In: month:09
    In: pages:96673-96692
    Language: English
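The speaker-driven editing step of the abstract above can be sketched as a tiny cut-list generator: given speaker segments (the kind of output an audio-diarization tool produces) and a speaker-to-camera mapping, select one camera per segment. All names, timings, and the mapping are invented for illustration and are not from the paper.

```python
# Sketch of speaker-driven camera selection; diarization output and the
# speaker-to-camera assignment below are hypothetical placeholders.
segments = [                    # (speaker, start_s, end_s)
    ("anna", 0.0, 4.2),
    ("ben", 4.2, 9.0),
    ("anna", 9.0, 11.5),
]
speaker_to_camera = {"anna": "cam2", "ben": "cam3"}   # assumed mapping

def cut_list(segments, mapping, default="cam1"):
    """One cut per speaker segment; fall back to a wide shot if the speaker is unknown."""
    return [(mapping.get(spk, default), start, end) for spk, start, end in segments]

cuts = cut_list(segments, speaker_to_camera)
for cam, start, end in cuts:
    print(f"{start:5.1f}-{end:5.1f} s -> {cam}")
```

A real editing script would additionally apply the montage rules derived from the Reference Recording System (minimum shot lengths, reaction shots, and so on); this sketch only shows the basic segment-to-camera step.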
  • 9
    UID:
    (DE-627)1054638195
    ISSN: 2169-3536
    In: Institute of Electrical and Electronics Engineers, IEEE access, New York, NY : IEEE, 2013, 7(2019), pages 20083-20090, 2169-3536
    In: volume:7
    In: year:2019
    In: pages:20083-20090
    Language: English
    URL: Full text (free of charge)
  • 10
    UID:
    (DE-627)1813779929
    ISSN: 2169-3536
    Content: The paper presents AVQBits, a versatile, bitstream-based video quality model. It can be applied in several contexts, such as video service monitoring and the evaluation of video encoding quality, of gaming video QoE, and even of omnidirectional video quality. The paper shows that AVQBits predictions closely match video quality ratings obtained in various subjective tests with human viewers, for videos up to 4K-UHD resolution (Ultra-High Definition, 3840 x 2160 pixels) and framerates up to 120 fps. With the different variants of AVQBits presented in the paper, video quality can be monitored either at the client side, in the network, or directly after encoding. The no-reference AVQBits model was developed for different video services and types of input data, reflecting the increasing popularity of video-on-demand services and the widespread use of HTTP-based adaptive streaming. At its core, AVQBits encompasses the standardized ITU-T P.1204.3 model, with further model instances that can have either restricted or extended input information, depending on the application context. Four different instances of AVQBits are presented: a Mode 3 model with full access to the bitstream; a Mode 0 variant using only metadata such as codec type, framerate, resolution, and bitrate as input; a Mode 1 model using Mode 0 information plus frame-type and frame-size information; and a Hybrid Mode 0 model that is based on Mode 0 metadata and the decoded video pixel information. The models are trained on the authors' own AVT-PNATS-UHD-1 dataset described in the paper. All models show highly competitive performance on the AVT-VQDB-UHD-1 validation dataset, e.g., with the Mode 0 variant yielding a Pearson correlation of 0.890, the Mode 1 model 0.901, the Hybrid Mode 0 model 0.928, and the model with full bitstream access 0.942. In addition, all four AVQBits variants are evaluated when applied out of the box to different media formats such as 360° video, high-framerate (HFR) content, or gaming videos. The analysis shows that, for the considered use cases, the ITU-T P.1204.3 and Hybrid Mode 0 instances of AVQBits perform on par with or better than even state-of-the-art full-reference, pixel-based models. Furthermore, it is shown that the proposed Mode 0 and Mode 1 variants outperform commonly used no-reference models for the different application scopes. Also, a long-term integration model based on the standardized ITU-T P.1203.3 is presented to estimate ratings of overall audiovisual streaming Quality of Experience (QoE) for sessions of 30 s up to 5 min duration. The AVQBits instances with their per-1-sec score output are evaluated as the video quality component of the proposed long-term integration model. All AVQBits variants as well as the long-term integration module are made publicly available to the community for further research.
    In: Institute of Electrical and Electronics Engineers, IEEE access, New York, NY : IEEE, 2013, 10(2022), pages 80321-80351, 2169-3536
    In: volume:10
    In: year:2022
    In: pages:80321-80351
    Language: English
    URL: Full text (free of charge)
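Validation figures such as the "0.890 Pearson Correlation" reported in the abstract above are computed by correlating model predictions with subjective quality scores; this generic sketch shows the computation on invented placeholder scores, not AVQBits output.

```python
import numpy as np

# How a Pearson-correlation validation number is obtained: correlate
# per-video model predictions with subjective ratings. Both score
# vectors below are hypothetical, for illustration only.
subjective = np.array([3.9, 2.1, 4.6, 1.8, 3.2, 4.1])  # mean opinion scores (hypothetical)
predicted  = np.array([3.7, 2.4, 4.4, 2.0, 3.0, 4.3])  # model predictions (hypothetical)

r = np.corrcoef(subjective, predicted)[0, 1]
print(f"Pearson correlation: {r:.3f}")
```

Values close to 1 mean the model's ranking and scaling of quality closely track the human ratings, which is what the reported 0.890-0.942 range expresses for the four AVQBits variants.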