  • 1
    Online Resource
    World Scientific Pub Co Pte Ltd ; 2021
    In: International Journal of Software Engineering and Knowledge Engineering, World Scientific Pub Co Pte Ltd, Vol. 31, No. 08 (2021-08), p. 1099-1118
    Abstract: Requirements-to-code tracing is an important and costly task that creates trace links from requirements to source code. These trace links help engineers reduce the time and complexity of software maintenance. Code comments play an important role in software maintenance tasks, yet few studies have looked closely at their impact on the creation of requirements-to-code trace links. Different types of comments serve different purposes, so how do different types of code comments improve trace link creation? We investigate whether code comments, and which types of comments, can improve the quality of trace link creation. This paper presents a study that evaluates the contribution of code comments, and of different comment types, to the creation of trace links. More specifically, the paper first experimentally evaluates the impact of code comments on requirements-to-code trace link creation, and then divides code comments into six categories to evaluate their individual impact. The results show that precision increases by an average of 15% (at the same recall) after adding code comments, even across different trace link creation techniques, and that Purpose comments contribute more to the tracing task than the other five types. This empirical study provides evidence that code comments are effective for trace link creation and that different types of comments contribute differently; Purpose comments in particular can be used to improve the accuracy of requirements-to-code trace link creation.
    Type of Medium: Online Resource
    ISSN: 0218-1940 , 1793-6403
    Language: English
    Publisher: World Scientific Pub Co Pte Ltd
    Publication Date: 2021
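The tracing approach studied in record 1 is information-retrieval style: requirements and code are compared as text, and comments enlarge the textual content on the code side. As a purely illustrative sketch (not the authors' tooling), the effect can be reproduced with scikit-learn TF-IDF and cosine similarity; the function name and the toy data below are invented for illustration:

# Minimal sketch of IR-based requirements-to-code tracing: rank code documents
# against a requirement by TF-IDF cosine similarity, optionally appending each
# file's extracted comments to its text (the effect studied in record 1).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def rank_trace_candidates(requirement, code_texts, comment_texts=None):
    # When comments are supplied, augment each code document with them
    # before vectorization.
    if comment_texts is not None:
        code_texts = [c + "\n" + m for c, m in zip(code_texts, comment_texts)]
    vectorizer = TfidfVectorizer(lowercase=True, token_pattern=r"[A-Za-z_]+")
    matrix = vectorizer.fit_transform([requirement] + list(code_texts))
    scores = cosine_similarity(matrix[0], matrix[1:]).ravel()
    return sorted(enumerate(scores), key=lambda p: p[1], reverse=True)

# Toy example: the comment is what makes the second file the better candidate.
req = "The system shall encrypt user passwords before storing them."
files = ["def save(u): db.put(u)", "def store(u): db.put(hash_pw(u.pw))"]
comments = ["", "# encrypt the user password before persisting it"]
print(rank_trace_candidates(req, files, comments))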
  • 2
    Online Resource
    World Scientific Pub Co Pte Ltd ; 2020
    In: International Journal of Software Engineering and Knowledge Engineering, World Scientific Pub Co Pte Ltd, Vol. 30, No. 05 (2020-05), p. 649-668
    Abstract: In recent years, deep learning models have shown great potential in source code modeling and analysis. Generally, deep learning-based approaches are problem-specific and data-hungry; a challenging consequence is that they must be trained from scratch for each related problem. In this work, we propose a transfer learning-based approach that significantly improves the performance of deep learning-based source code models. In contrast to traditional learning paradigms, transfer learning carries the knowledge learned in solving one problem over to a related problem. First, we present two recurrent neural network-based models, an RNN and a GRU, for transfer learning in the domain of source code modeling. Next, via transfer learning, these pre-trained models are used as feature extractors. The extracted features are then combined in an attention learner for different downstream tasks; the attention learner leverages the learned knowledge of the pre-trained models and fine-tunes them for a specific downstream task. We evaluate the proposed approach with extensive experiments on the source code suggestion task. The results indicate that it outperforms state-of-the-art models in terms of accuracy, precision, recall and F-measure without training the models from scratch.
    Type of Medium: Online Resource
    ISSN: 0218-1940 , 1793-6403
    Language: English
    Publisher: World Scientific Pub Co Pte Ltd
    Publication Date: 2020
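Record 2's idea of reusing pre-trained recurrent code models as frozen feature extractors, with only a small task head fine-tuned, can be sketched as below. This is not the authors' released code: the PyTorch module names, layer sizes, and the simplified additive-attention head (a stand-in for the paper's attention learner) are assumptions.

# Sketch (PyTorch): a GRU language model pre-trained on code tokens is frozen
# and reused as a feature extractor; only a small head is trained for the
# source code suggestion task.
import torch
import torch.nn as nn

class GRUCodeModel(nn.Module):
    def __init__(self, vocab_size, emb=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        self.gru = nn.GRU(emb, hidden, batch_first=True)

    def forward(self, tokens):                 # tokens: (batch, seq_len)
        out, _ = self.gru(self.embed(tokens))  # (batch, seq_len, hidden)
        return out

class SuggestionHead(nn.Module):
    # Task-specific head fine-tuned on top of the frozen extractor.
    def __init__(self, pretrained, vocab_size, hidden=256):
        super().__init__()
        self.extractor = pretrained
        for p in self.extractor.parameters():  # freeze the transferred model
            p.requires_grad = False
        self.attn = nn.Linear(hidden, 1)       # simple additive attention
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, tokens):
        feats = self.extractor(tokens)                    # (B, T, H)
        weights = torch.softmax(self.attn(feats), dim=1)  # (B, T, 1)
        context = (weights * feats).sum(dim=1)            # (B, H)
        return self.out(context)                          # next-token logits

vocab = 5000
model = SuggestionHead(GRUCodeModel(vocab), vocab)
logits = model(torch.randint(0, vocab, (2, 20)))  # two sequences of 20 tokens
print(logits.shape)                               # torch.Size([2, 5000])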
  • 3
    Online Resource
    World Scientific Pub Co Pte Ltd ; 2021
    In: International Journal of Software Engineering and Knowledge Engineering, World Scientific Pub Co Pte Ltd, Vol. 31, No. 09 (2021-09), p. 1299-1327
    Abstract: Application Programming Interfaces (APIs) play an important role in modern software development. Developers interact with APIs daily and thus need to learn and memorize the APIs suitable for implementing the required functions, which can be a burden even for experienced developers given the mass of available APIs. API recommendation techniques aim to assist developers in selecting suitable APIs. However, existing techniques do not take developers’ personal characteristics into account, so they cannot provide personalized API recommendation services, and they lack support for self-defined APIs. To this end, we propose a personalized API recommendation method that considers developers’ differences. Our method is based on statistical language modeling: we propose a model structure that combines an N-gram model with a long short-term memory (LSTM) neural network, and we train predictive models on API invocation sequences extracted from GitHub code repositories. A general language model trained on all sorts of code data is acquired first; on top of it, two personalized language models, one recommending library APIs and one recommending self-defined APIs, are trained on the code of the developer who needs personalized services. We evaluate the method on real-world developers, and the experimental results show that it achieves better accuracy in recommending both library APIs and self-defined APIs than the state-of-the-art. The results also confirm the effectiveness of the hybrid model structure and the choice of the LSTM’s size.
    Type of Medium: Online Resource
    ISSN: 0218-1940 , 1793-6403
    Language: English
    Publisher: World Scientific Pub Co Pte Ltd
    Publication Date: 2021
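A rough sketch of the hybrid structure described in record 3, interpolating an N-gram estimate of the next API call with a neural estimate, is given below. The class and function names, the interpolation weight, and the stubbed LSTM component are illustrative assumptions rather than the paper's implementation:

# Sketch of the hybrid idea in record 3: interpolate an N-gram estimate of the
# next API call with a neural (LSTM) estimate.  The LSTM is stubbed here; in
# the paper it is trained on API-invocation sequences mined from GitHub.
from collections import Counter, defaultdict

class BigramAPIModel:
    def __init__(self, sequences):
        self.counts = defaultdict(Counter)
        for seq in sequences:
            for prev, nxt in zip(seq, seq[1:]):
                self.counts[prev][nxt] += 1

    def prob(self, prev, nxt):
        total = sum(self.counts[prev].values())
        return self.counts[prev][nxt] / total if total else 0.0

def lstm_prob(context, candidate):
    # Placeholder for the neural component's P(candidate | context).
    return 0.0  # a trained LSTM would return a real probability here

def recommend(context, candidates, ngram, lam=0.5):
    # Rank candidate APIs by an interpolated score.
    scored = [(api, lam * ngram.prob(context[-1], api)
                    + (1 - lam) * lstm_prob(context, api))
              for api in candidates]
    return sorted(scored, key=lambda p: p[1], reverse=True)

history = [["open", "read", "close"], ["open", "write", "close"], ["open", "read", "close"]]
model = BigramAPIModel(history)
print(recommend(["open"], ["read", "write", "close"], model))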
  • 4
    Online Resource
    World Scientific Pub Co Pte Ltd ; 2022
    In: International Journal of Software Engineering and Knowledge Engineering, World Scientific Pub Co Pte Ltd, Vol. 32, No. 08 (2022-08), p. 1203-1228
    Abstract: API recommendation is crucial for improving programmers’ productivity, and much work has been proposed to improve its accuracy. In existing work, metrics such as Precision, Recall, and MAP are used to evaluate the accuracy of the recommendation. These metrics reflect the ability to distinguish useful APIs from the candidate set, but they cannot evaluate the ability to determine the relative priority of the useful APIs. The priority among related APIs directly determines whether the recommended results are practical for developers. From this perspective, and inspired by sequence-aware recommendation, this paper constructs a sequence-aware API recommendation method and designs new metrics to evaluate the method’s ability to determine the priority of useful APIs. The experimental results show that, compared with the baseline, the proposed method not only achieves better results on the common, widely used metrics but also outperforms the baseline on the newly proposed sequence metrics.
    Type of Medium: Online Resource
    ISSN: 0218-1940 , 1793-6403
    Language: English
    Publisher: World Scientific Pub Co Pte Ltd
    Publication Date: 2022
  • 5
    Online Resource
    World Scientific Pub Co Pte Ltd ; 2022
    In: International Journal of Software Engineering and Knowledge Engineering, World Scientific Pub Co Pte Ltd, Vol. 32, No. 10 (2022-10), p. 1499-1526
    Abstract: Studies have confirmed the robust performance of machine learning classifiers for various source code modeling tasks. In general, however, machine learning approaches handle imbalanced datasets poorly, since they are sensitive to the class distribution and may lean towards the classes with a large share of the observations. In this work, we investigate the impact of balanced and imbalanced learning on the source code suggestion task, otherwise known as code completion, which involves a large number of imbalanced classes, and we further explore the impact of vocabulary size on modeling performance. First, we provide the essentials to formulate source code suggestion as a classification task and investigate the level of class imbalance. Second, we train the four most widely adopted neural language models as baselines to assess modeling performance. Third, we apply two diverse class balancing techniques, TomekLinks and AllKNN, to balance the datasets and evaluate their impact. Finally, we train these models with a weighted imbalanced learning approach and compare the performance with the balanced learning approaches; additionally, we vary the vocabulary size to study its impact. In total, we trained 230 models on 10 real-world software projects and extensively evaluated them with widely used performance metrics such as Precision, Recall, F-score, mean reciprocal rank (MRR), and receiver operating characteristic (ROC), employing ANOVA statistical analysis to study the significance of the differences between the approaches. This study demonstrates that modeling performance decreases under balanced model training, whereas weighted imbalanced training produces comparable results and is more efficient in terms of time cost. It also shows that a large vocabulary does not necessarily improve modeling performance when out-of-vocabulary predictions are disregarded.
    Type of Medium: Online Resource
    ISSN: 0218-1940 , 1793-6403
    Language: English
    Publisher: World Scientific Pub Co Pte Ltd
    Publication Date: 2022
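The two training regimes compared in record 5, undersampling with TomekLinks/AllKNN versus weighted training on the original data, can be illustrated with imbalanced-learn and scikit-learn. A binary toy dataset and a linear classifier stand in for the highly multi-class token-prediction setting and the neural language models used in the paper:

# Sketch of the two strategies compared in record 5: undersample with
# TomekLinks / AllKNN (imbalanced-learn) versus weighted training on the
# original, imbalanced data.  Synthetic binary data and a linear classifier
# are stand-ins for the paper's datasets and neural language models.
from imblearn.under_sampling import TomekLinks, AllKNN
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)

# Balanced learning: resample the training data, then train normally.
for sampler in (TomekLinks(), AllKNN()):
    X_res, y_res = sampler.fit_resample(X, y)
    clf = LogisticRegression(max_iter=1000).fit(X_res, y_res)
    print(type(sampler).__name__, "resampled size:", len(y_res))

# Weighted imbalanced learning: keep all data, reweight the loss per class.
weighted = LogisticRegression(max_iter=1000, class_weight="balanced").fit(X, y)
print("weighted model classes:", weighted.classes_)

The weighted variant skips the resampling step entirely, which is consistent with the abstract's observation that weighted imbalanced training is more efficient in terms of time cost.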