Enhancements: Release Notes Machine Learning Classifier

Created on 30 May 2020  ·  6 Comments  ·  Source: kubernetes/enhancements

Enhancement Description

  • One-line enhancement description (can be used as a release note):
    Machine learning binary (good/bad) classification and process enhancement to continuously improve the release notes.
  • Kubernetes Enhancement Proposal: TBD
  • Primary contact (assignee): @saschagrunert
  • Responsible SIGs: @kubernetes/sig-release
  • Enhancement target (which target equals to which milestone): v1.20

    • Alpha release target (v1.20)

    • Beta release target (v1.21)

    • Stable release target (v1.22)

Overview

_This is the outcome of a brainstorming session with @puerco about further enhancements to the release notes._

The idea is to build a continuous release notes improvement process that trains a machine learning model to classify release notes as good or bad. The input for the model should be created continuously during the whole release cycle by the Release Notes Team of SIG Release. Enhancements to the release engineering tooling will support the overall process change. As a side benefit, it would then be possible to fix bad release notes without needing to change the associated PR body. This way, the "release notes fixing period" would extend from the end of the cycle (code thaw) to the whole cycle. Once the machine learning model is available and trained, a prow plugin will provide feedback about the classification of a release note directly at PR creation time.
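The KEP is still TBD and prescribes no model or framework; purely as an illustration, a binary good/bad classifier of the kind described above might look like the following sketch. scikit-learn, TF-IDF features, and logistic regression are assumptions of this example, not of the proposal:

```python
# Minimal sketch of a good/bad release-note classifier.
# scikit-learn, TF-IDF, and logistic regression are assumptions of
# this example; the proposal does not prescribe a model or framework.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data labeled by the Release Notes Team
# (1 = good, 0 = bad).
notes = [
    "Fixed a bug where the kubelet crashed when a node was rebooted.",
    "Added a flag to kube-apiserver to toggle the (hypothetical) foo feature.",
    "fix stuff",
    "update deps",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression())
model.fit(notes, labels)

# A prow plugin could call a service wrapping the trained model and
# comment on the PR with the predicted quality of its release note.
print(model.predict_proba(["minor cleanup"])[0])  # [P(bad), P(good)]
```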

Some pre-work has already been done and can be found in the corresponding blog post.

All 6 comments

/sig release

/assign

Extending the release notes review period to the whole release cycle would give the team more time to work on improving the quality of the notes as per the guidelines in the Contributor's Guide.

Our tools would give the team an easy way to improve a note deemed of low quality and produce the needed data without modifying the original PR. The additional data could also be used to add more context and metadata to the release notes.

As time goes by, this loop of release notes improvements will start producing data points consisting of the original release note and the additional data used to improve it. This byproduct will eventually become a dataset large enough to train the AI model so it can better assist the team in classifying release notes and, finally, as Sascha said above, provide feedback at PR creation time.
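As an illustration of what such data points might look like, the sketch below pairs each original note with the team's improved version and derives labeled training examples from the pairs. The record layout and field names are hypothetical, as the proposal does not define a schema:

```python
# Sketch of turning the review loop's byproduct into training data.
# The record layout and field names are hypothetical.
from dataclasses import dataclass
from typing import Iterable, Iterator, Tuple

@dataclass
class NoteReview:
    pr_number: int  # PR the note came from
    original: str   # note as submitted by the PR author
    improved: str   # note after the Release Notes Team's edit

def to_examples(reviews: Iterable[NoteReview]) -> Iterator[Tuple[str, int]]:
    """Yield (text, label) pairs: originals that needed editing count
    as bad (0), the improved versions as good (1)."""
    for review in reviews:
        if review.original != review.improved:
            yield (review.original, 0)
        yield (review.improved, 1)
```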

Before the classifier is finished and we have collected enough data to run the model, the responsibilities of the Release Notes Team will necessarily need to change from our current duties, which until now were to run the release notes tooling and submit its output as a PR.

To build the classifier, the release team would split the work of reviewing the release notes that have been submitted so far and start producing the improvement data. This would be a rather large effort during the Alpha and Beta stages. As the classifier's confidence improves, fewer and fewer notes will need to be edited by hand, as note authors will eventually be required to meet a minimum quality bar at PR submission.
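One possible reading of that minimum quality bar (an assumption of this sketch, not something the proposal specifies) is a confidence threshold on the classifier's output, reusing the model from the earlier sketch:

```python
# Sketch of gating note quality at PR submission with a confidence
# threshold; the bar below is illustrative, not part of the proposal.
MIN_GOOD_PROBABILITY = 0.5

def review_note(model, note: str) -> str:
    # predict_proba returns [P(bad), P(good)] for labels {0, 1}.
    p_good = model.predict_proba([note])[0][1]
    return "ok" if p_good >= MIN_GOOD_PROBABILITY else "needs-improvement"
```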

When the classifier reaches maturity and we have collected enough samples, the Release Notes Team would become the supervisors of the bot, keeping an eye on what it gets fed and helping it where the model falls short. As the improvement loop would be constantly running, notes that did not get flagged as being of lower quality would, again, be fixed by hand and fed back into the training process. This would improve the original model and any that might follow in the future, which could take other PR elements into consideration: labels, area/sig, and even additional metadata added during the release cycle.
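As a sketch of how those PR elements might be folded into a future model, one simple assumed approach is to prepend labels and SIG as pseudo-tokens so they can be fed through the same text pipeline; the token scheme is an assumption of this example:

```python
# Sketch of folding PR metadata into the text features by prepending
# pseudo-tokens; the token scheme is an assumption of this example.
def featurize(note: str, pr_labels: list, sig: str) -> str:
    tokens = ["label:" + label for label in pr_labels] + ["sig:" + sig]
    return " ".join(tokens) + " " + note

print(featurize("Fix kubelet crash on node reboot", ["kind/bug"], "node"))
# -> label:kind/bug sig:node Fix kubelet crash on node reboot
```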

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale
