Compare commits

..

61 Commits

Author SHA1 Message Date
Kerwin Bryant
e632ba8824
Merge branch 'main' into add-file-tree-to-file-view-page 2025-02-19 16:48:33 +08:00
wxiaoguang
c2e23d3301
Fix PR web route permission check (#33636)
See the FIXME comment in the code. Otherwise, if a repo's issues unit is disabled, the PRs can't be edited anymore.
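
A minimal, self-contained sketch of the idea behind the fix (types and names are illustrative, not the actual Gitea route code): the permission check for editing should follow the kind of item, so a disabled issues unit does not also lock pull requests.

```go
package main

import "fmt"

// UnitType mirrors the idea of repository units (issues, pull requests, ...).
type UnitType int

const (
	UnitIssues UnitType = iota
	UnitPullRequests
)

// Permission is a simplified stand-in for a repository permission set.
type Permission struct {
	enabledUnits map[UnitType]bool
}

// canEditIssueOrPull checks the unit that matches the item kind, so that
// disabling the issues unit does not also block pull-request editing.
func (p Permission) canEditIssueOrPull(isPull bool) bool {
	unit := UnitIssues
	if isPull {
		unit = UnitPullRequests
	}
	return p.enabledUnits[unit]
}

func main() {
	// Issues unit disabled, pull requests enabled: PRs stay editable.
	p := Permission{enabledUnits: map[UnitType]bool{UnitPullRequests: true}}
	fmt.Println(p.canEditIssueOrPull(true))  // true
	fmt.Println(p.canEditIssueOrPull(false)) // false
}
```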

By the way, make the permission log output look slightly better.

---------

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: metiftikci <metiftikci@hotmail.com>
2025-02-19 00:55:19 +00:00
metiftikci
84d2159ef6
fix: add missing locale (#33641)
This was removed in #23113 but is still used in `head_navbar.tmpl`.
2025-02-18 16:29:17 -08:00
Kerwin Bryant
ce65613690
Fix Untranslated Text on Actions Page (#33635)
Fix the problem of untranslated text on the actions page

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2025-02-18 11:29:08 +00:00
Guillaume
748b731612
Improve button layout on small screens (#33633)
Fix #33160 

Better "New Repository" & "New Migration" buttons on home page.

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2025-02-18 11:02:40 +00:00
yp05327
241f799edf
Update README screenshots (#33347)
@lunny @techknowlogick
Wait for the update of https://dl.gitea.com/screenshots
Can you move all old screenshots into
https://dl.gitea.com/screenshots/old ?
Then run the action to upload new screenshots to
https://dl.gitea.com/screenshots

Follow #33149.
As I mentioned here:
https://github.com/go-gitea/gitea/pull/33149#issuecomment-2581787057,
the prepare process is almost finished.
The backend uses the newly added `workflow_dispatch` feature of Gitea Actions to take the screenshots automatically, so we can easily sync the screenshots with the latest version without tedious manual work.
Get more information from https://gitea.com/gitea/deployment
2025-02-18 00:10:30 -08:00
Lunny Xiao
9f560d47c9
Make actions URL in commit status webhooks absolute (#33620)
The target URL generated by Gitea Actions doesn't contain the host and port, so we need to include them for external webhooks to reach it.

Fix #33603
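
A small self-contained sketch of the idea (helper name and base URL are placeholders, not the actual Gitea code): a repo-relative Actions target URL is joined with the instance root URL before the commit status is delivered to external webhooks.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// toAbsoluteTargetURL joins a repo-relative Actions target URL with the
// instance root URL; already-absolute URLs are passed through unchanged.
func toAbsoluteTargetURL(appURL, target string) string {
	if u, err := url.Parse(target); err == nil && u.IsAbs() {
		return target
	}
	return strings.TrimSuffix(appURL, "/") + "/" + strings.TrimPrefix(target, "/")
}

func main() {
	fmt.Println(toAbsoluteTargetURL("https://gitea.example.com/", "owner/repo/actions/runs/1"))
	// Output: https://gitea.example.com/owner/repo/actions/runs/1
}
```
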

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2025-02-18 02:20:18 +00:00
wxiaoguang
15e020eec8
Refactor error system (#33626) 2025-02-17 12:41:03 -08:00
Lunny Xiao
7df09e31fa
Move issue pin to a standalone table for querying performance (#33452)
Noticed a SQL query on gitea.com causing a heavy load. It seems neither `is_pull`
nor `pin_order` is an indexed column in the database.

```SQL
SELECT `id`, `repo_id`, `index`, `poster_id`, `original_author`, `original_author_id`, `name`, `content`, `content_version`, `milestone_id`, `priority`, `is_closed`, `is_pull`, `num_comments`, `ref`, `pin_order`, `deadline_unix`, `created_unix`, `updated_unix`, `closed_unix`, `is_locked`, `time_estimate` FROM `issue` WHERE (repo_id =?) AND (is_pull = 0) AND (pin_order > 0) ORDER BY pin_order
```

I came across a comment
https://github.com/go-gitea/gitea/pull/24406#issuecomment-1527747296
from @delvh , which presents a more reasonable approach. Based on this,
this PR will migrate all issue and pull request pin data from the
`issue` table to the `issue_pin` table. This change benefits larger
Gitea instances by improving scalability and performance.
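
A sketch of what the standalone table can look like (field names are assumptions drawn from the description above, not copied from the migration): pins move into their own table keyed by repository, so the pinned-issue query no longer filters on unindexed columns of the much larger `issue` table.

```go
package issues

// IssuePin is an illustrative model for the new standalone pin table.
// A unique (repo_id, issue_id) pair replaces the old `pin_order` column
// on the `issue` table.
type IssuePin struct {
	ID       int64 `xorm:"pk autoincr"`
	RepoID   int64 `xorm:"UNIQUE(s) NOT NULL"`
	IssueID  int64 `xorm:"UNIQUE(s) NOT NULL"`
	IsPull   bool  `xorm:"NOT NULL"`
	PinOrder int   `xorm:"DEFAULT 0"`
}
```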

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2025-02-17 11:28:37 -08:00
silverwind
f5a81f9636
Run spellcheck on tools directory (#33627)
Add `tools` files to the spellcheck and fix one issue it found.
2025-02-17 18:39:12 +01:00
wxiaoguang
f35850f48e
Refactor error system (#33610)
2025-02-16 22:13:17 -08:00
Lunny Xiao
69de5a65c2
Fix project issues list and counting (#33594)
Co-authored-by: delvh <dev.lh@web.de>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2025-02-17 05:14:56 +00:00
Lunny Xiao
5df9fd3e9c
Add API to support link package to repository and unlink it (#33481)
Fix #21062

---------

Co-authored-by: Zettat123 <zettat123@gmail.com>
2025-02-16 19:18:00 -08:00
GiteaBot
50a5d6bf5d [skip ci] Updated translations via Crowdin 2025-02-17 00:33:47 +00:00
silverwind
3bbacac62c
Update JS and PY dependencies (#33587)
- Update all dependencies excluding `tailwindcss` and `idiomorph`
- Tested citation, asciinema, pdf, swagger

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2025-02-16 14:14:23 +01:00
Sandro Santilli
37c4f3760c
[chore] add git mailmap for proper attribution of authorship (#33612) 2025-02-16 20:49:28 +08:00
Lunny Xiao
58c124cc4f
Move commits signature and verify functions to service layers (#33605)
No logic change, just move functions.
2025-02-16 12:24:07 +00:00
Sveinn Thorarinsson
62389dd08b
add spacing between sign in button's icon and text (#33609)
This pull request edits the head_navbar template and adds spacing between
the icon and the text inside the sign-in button of the navbar (the button
shown at the top right of Gitea's pages when the user is not signed in).

It bugged me that there was no spacing between the button's contents, so I
test-ran this change quickly on my server, thought it looked a lot better,
and decided to make this pull request. Up to you to decide if you agree
that it looks better :)
2025-02-16 11:27:31 +00:00
Darren Hoo
950abfe8ee
enable literal string for code search (#33590)
Close: #33588

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Giteabot <teabot@gitea.io>
2025-02-16 11:28:06 +01:00
GiteaBot
fc1b383da9 [skip ci] Updated translations via Crowdin 2025-02-16 00:34:48 +00:00
ChristopherHX
2b8cfb557d
Artifacts download api for artifact actions v4 (#33510)
* The download endpoint has to use a 302 redirect
* A fake blob download is used if a direct download is not possible
* Downloading v3 artifacts is not possible

New repo APIs modeled on the GitHub REST v3 API:
- GET /runs/{run}/artifacts (the run index from the URL cannot be used because it is not unique)
- GET /artifacts
- GET + DELETE /artifacts/{artifact_id}
- GET /artifacts/{artifact_id}/zip
- GET /artifacts/{artifact_id}/zip/raw (a workaround for an HTTP 302 assertion in actions/toolkit)
- The API docs for it are removed: it is protected by a signed URL like the internal
artifacts API and is no longer usable with any token or via Swagger
  - returns HTTP 401 if the signature is invalid, the artifact id is changed,
    or the URL has expired (after 1 hour)

Closes #33353
Closes #32124
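
As an illustration only (host, token, and the exact path are assumptions based on the endpoint list above, not taken from the PR), listing a repository's artifacts from a client could look like this:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// List artifacts of a repository via the new repo-level API.
	req, err := http.NewRequest(http.MethodGet,
		"https://gitea.example.com/api/v1/repos/owner/repo/actions/artifacts", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "token <your-token>")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```

Downloading an artifact's zip would follow the 302 redirect mentioned above; Go's default HTTP client does that automatically.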

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2025-02-16 08:32:54 +08:00
Lunny Xiao
01bf8da02e
Fix bug when getting a commit (#33602)
Fix #33595
2025-02-15 15:16:19 -08:00
ericLemanissier
57997f1518
Fix mirror bug (#33597)
Follow-up to be4e961240883778c44d9651eaaf9ab8723bbbb0

Fix https://github.com/go-gitea/gitea/issues/33200

---------

Co-authored-by: Giteabot <teabot@gitea.io>
2025-02-15 18:29:44 +08:00
silverwind
1ba7cbbfd6
Fix typo in HTML attribute (#33599)
2025-02-14 10:59:37 -05:00
Zettat123
8aede14b1d
Use default Git timeout when checking repo health (#33593) 2025-02-14 15:13:56 +00:00
Lunny Xiao
70327d6a92
Improve commits list performance to reduce unnecessary database queries (#33528)
When listing commits, Gitea attempts to retrieve the actual user based
on the commit email. Querying users one by one from the database is
inefficient. This PR optimizes the process by batch querying users by
email, reducing the number of database queries.
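
A compilable sketch of the batching idea (names are illustrative; the callback stands in for a single `WHERE email IN (...)` query): collect the distinct commit emails first, query once, and look each commit's author up in the resulting map.

```go
package commits

import "context"

// User is a stand-in for the real user model.
type User struct {
	ID    int64
	Email string
}

// mapUsersByEmail replaces a per-commit lookup with one batched query.
func mapUsersByEmail(ctx context.Context, emails []string,
	findByEmails func(context.Context, []string) ([]*User, error),
) (map[string]*User, error) {
	seen := make(map[string]struct{}, len(emails))
	distinct := make([]string, 0, len(emails))
	for _, e := range emails {
		if _, ok := seen[e]; !ok {
			seen[e] = struct{}{}
			distinct = append(distinct, e)
		}
	}
	users, err := findByEmails(ctx, distinct) // one query instead of N
	if err != nil {
		return nil, err
	}
	byEmail := make(map[string]*User, len(users))
	for _, u := range users {
		byEmail[u.Email] = u
	}
	return byEmail, nil
}
```
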
2025-02-14 00:05:55 -08:00
Lunny Xiao
f232d8f530
Performance optimization for pull request files loading comments attachments (#33585) 2025-02-14 06:49:58 +00:00
wxiaoguang
b426e383fe
Fix PR's target branch dropdown (#33589)
Fix #33586

It only moves `PrepareBranchList` and `retrieveAssigneesData` to before
the `CanModifyIssueOrPull` check, and adds more comments
2025-02-14 05:21:31 +00:00
techknowlogick
d88b012525
go1.24 (#33562)
update to use go1.24

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2025-02-13 18:00:00 +08:00
Exploding Dragon
fba365b425
Only show the latest version in the Arch index (#33262)
Only show the latest version of the package in the arch repo.

closes #33534

---------

Co-authored-by: Giteabot <teabot@gitea.io>
Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2025-02-13 01:24:44 +00:00
GiteaBot
42d817e814 [skip ci] Updated translations via Crowdin 2025-02-13 00:31:39 +00:00
silverwind
3e39583bb5
Enable eslint for commonjs (#33575)
2025-02-12 22:47:54 +00:00
wxiaoguang
e741448a14
Fix various problems (artifact order, api empty slice, assignee check, fuzzy prompt, mirror proxy, adopt git) (#33569)
* Make artifact list output a stable order
* Fix #33506
* Fix #33521
* Fix #33288
* Fix #33196
* Fix #33561
2025-02-13 03:26:27 +08:00
silverwind
bcd1317d17
Switch to @vitest/eslint-plugin (#33573)
Package has been renamed and now also provides the globals so we can
replace two dependencies with one.

Ref: https://github.com/vitest-dev/eslint-plugin-vitest/issues/537
2025-02-12 11:08:34 -05:00
wxiaoguang
f58f5bb3d8
Avoid duplicate SetContextValue call (#33564)
And fix FIXME and TODO
2025-02-12 14:25:46 +08:00
Zettat123
06f1065636
Add a transaction to pickTask (#33543)
In the old `pickTask`, when getting secrets or variables failed, the
task could get stuck in the `running` status (task status is `running`
but the runner did not fetch the task). To fix this issue, these steps
should be in one transaction.
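
A rough, self-contained sketch of the transactional shape (function and model names are assumptions, not the real Gitea code): marking the task as running and loading its secrets and variables happen in one transaction, so any failure rolls back instead of leaving an orphaned `running` task.

```go
package actions

import (
	"context"
	"database/sql"
)

// Task is a stand-in for the Actions task model.
type Task struct {
	ID        int64
	Secrets   map[string]string
	Variables map[string]string
}

// pickTask assigns a task and loads its secrets/variables atomically.
func pickTask(ctx context.Context, db *sql.DB, runnerID int64,
	assign func(context.Context, *sql.Tx, int64) (*Task, error),
	loadSecrets func(context.Context, *sql.Tx, *Task) error,
	loadVariables func(context.Context, *sql.Tx, *Task) error,
) (*Task, error) {
	tx, err := db.BeginTx(ctx, nil)
	if err != nil {
		return nil, err
	}
	defer tx.Rollback() // no-op after a successful Commit

	task, err := assign(ctx, tx, runnerID) // sets status to "running"
	if err != nil {
		return nil, err
	}
	if err := loadSecrets(ctx, tx, task); err != nil {
		return nil, err // rolled back: the task is not stuck in "running"
	}
	if err := loadVariables(ctx, tx, task); err != nil {
		return nil, err
	}
	return task, tx.Commit()
}
```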

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2025-02-12 00:09:43 +08:00
wxiaoguang
245ac321c3
Fix context usage (#33554)
Some old code used direct type-casting to get the context, which causes
problems.

This PR fixes all such legacy problems and uses the correct `ctx.Value` calls
to get the low-level contexts.

Fix #33518
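
A small standalone example of the general pattern (not the actual Gitea contexts): values stored in a context should be read back with `ctx.Value`, because type-asserting the context itself stops working as soon as another layer wraps it.

```go
package main

import (
	"context"
	"fmt"
)

type ctxKey string

const requestIDKey ctxKey = "requestID"

// requestID reads the value through ctx.Value, which keeps working no matter
// how many times the context has been wrapped since the value was set.
func requestID(ctx context.Context) (string, bool) {
	v, ok := ctx.Value(requestIDKey).(string)
	return v, ok
}

func main() {
	ctx := context.WithValue(context.Background(), requestIDKey, "abc123")
	ctx, cancel := context.WithCancel(ctx) // extra wrapping breaks a direct type cast
	defer cancel()
	fmt.Println(requestID(ctx)) // abc123 true
}
```
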
2025-02-11 16:46:03 +08:00
Jason Song
e9b98aef44
Enhance routers for the Actions runner operations (#33549)
- Find the runner before deleting
- Move the main logic from `routers/web/repo/setting/runners.go` to
`routers/web/shared/actions/runners.go`.

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2025-02-11 01:39:10 +00:00
GiteaBot
217ffe5492 [skip ci] Updated translations via Crowdin 2025-02-11 00:32:23 +00:00
silverwind
b3302748fa
Run yamllint with strict mode, fix issue (#33551)
Previously yamllint would issue warnings for certain things, while still
exiting with zero. Now warnings are treated like errors and will cause
non-zero exit:

```
  -s, --strict          return non-zero exit code on warnings as well as errors
```
2025-02-10 22:33:40 +00:00
Jason Song
c422f179dd
Enhance routers for the Actions variable operations (#33547)
- Find the variable before updating or deleting
- Move the main logic from `routers/web/repo/setting/variables.go` to
`routers/web/shared/actions/variables.go`.

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Giteabot <teabot@gitea.io>
2025-02-11 04:44:04 +08:00
jason19970210
e3adb686bb
enhancement: add additional command hints for PowerShell & CMD (#33548)
- resolving wrong signature calculations for SSH key verification

Fixed #22693

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
Co-authored-by: Giteabot <teabot@gitea.io>
2025-02-11 04:14:37 +08:00
wxiaoguang
30993e9508
Feature: Support workflow event dispatch via API (#33545)
Fix: https://github.com/go-gitea/gitea/issues/31765 (Re-open #32059)

---------

Co-authored-by: Bence Santha <git@santha.eu>
Co-authored-by: Bence Sántha <7604637+bencurio@users.noreply.github.com>
Co-authored-by: Christopher Homberger <christopher.homberger@web.de>
2025-02-11 03:05:42 +08:00
Kerwin Bryant
085f273d19
Optimize the dashboard (#32990)
before:

![image](https://github.com/user-attachments/assets/d0b432e4-a521-4540-a489-d18b9c265674)

after:

![image](https://github.com/user-attachments/assets/dbb8b387-d150-41e2-b12b-f9d8450e36d7)
-----

![image](https://github.com/user-attachments/assets/40dcd71e-344b-4043-9811-77227c71aed9)
-----

Optimize the dashboard by adding welcoming messages or quick action
entry points (such as adding a new repository or organization) to ensure
that new users are not greeted by a blank page upon logging in.

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2025-02-11 01:24:34 +08:00
Lunny Xiao
72518a8dab
Rework suggestion backend (#33538)
Fix #33522 

The suggestion backend logic is now:

- If the keyword is empty, return the latest 5 issues/PRs, in index descending order
- If the keyword is digital (numeric), find all issues/PRs whose `index` has it as a
prefix, in index ascending order
- If the keyword is non-digital, or if fewer than 5 records were found, search
issue/PR titles with a `like`, in index descending order
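
A compilable sketch of that decision logic (names and the query-plan shape are illustrative, not the actual implementation); the screenshots below show the resulting behaviour:

```go
package issues

import "strings"

// suggestionQuery describes one lookup in the plan sketched above.
type suggestionQuery struct {
	IndexPrefix string // digital keyword: match the issue index by prefix
	TitleLike   string // non-digital keyword or top-up: match the title
	OrderBy     string
	Limit       int
}

// planSuggestion picks the lookups to run for a given keyword.
func planSuggestion(keyword string) []suggestionQuery {
	const limit = 5
	if keyword == "" {
		return []suggestionQuery{{OrderBy: "index DESC", Limit: limit}}
	}
	if isDigits(keyword) {
		return []suggestionQuery{
			{IndexPrefix: keyword, OrderBy: "index ASC", Limit: limit},
			// run only if the prefix lookup returned fewer than `limit` rows
			{TitleLike: "%" + keyword + "%", OrderBy: "index DESC", Limit: limit},
		}
	}
	return []suggestionQuery{{TitleLike: "%" + keyword + "%", OrderBy: "index DESC", Limit: limit}}
}

func isDigits(s string) bool {
	return s != "" && strings.IndexFunc(s, func(r rune) bool { return r < '0' || r > '9' }) == -1
}
```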

## Empty keyword
<img width="310" alt="image"
src="https://github.com/user-attachments/assets/1912c634-0d98-4eeb-8542-d54240901f77"
/>

## Digital
<img width="479" alt="image"
src="https://github.com/user-attachments/assets/0356a936-7110-4a24-b21e-7400201bf9b8"
/>

## Digital and title contains the digital
<img width="363" alt="image"
src="https://github.com/user-attachments/assets/6e12f908-28fe-48de-8ccc-09cbeab024d4"
/>

## non-Digital
<img width="435" alt="image"
src="https://github.com/user-attachments/assets/2722bb53-baa2-4d67-a224-522a65f73856"
/>
<img width="477" alt="image"
src="https://github.com/user-attachments/assets/06708dd9-80d1-4a88-b32b-d29072dd1ba6"
/>

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2025-02-10 16:24:05 +00:00
wxiaoguang
704b65e012
Revert "Feature: Support workflow event dispatch via API (#32059)" (#33541)
This reverts commit 523751dc82bbb9d3f8d413f232e23ab0476eb4d4.
2025-02-10 17:44:42 +08:00
Bence Sántha
523751dc82
Feature: Support workflow event dispatch via API (#32059)
ref: https://github.com/go-gitea/gitea/issues/31765

---------

Signed-off-by: Bence Santha <git@santha.eu>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Christopher Homberger <christopher.homberger@web.de>
2025-02-10 05:23:57 +08:00
wxiaoguang
06088ec672
Remove "class-name" from svg icon (#33540)
Only use "class" attribute
2025-02-09 22:39:54 +02:00
Kerwin Bryant
a52720b5b4
Add "No data available" display when list is empty (#33517)
Add a "No data available" message to be displayed when the list has no
data. This improves the user experience by providing clear feedback in
an empty state.

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2025-02-09 00:13:41 +08:00
mscherer
063c23e1bc
Add an option "--user-type bot" to admin user create, improve role display (#27885)
Partially solve #13044

Fix #33295

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2025-02-07 08:41:55 +00:00
TheFox0x7
1ec8d80fa3
refactor: decouple context from migration structs (#33399)
Use context as much as possible.

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2025-02-07 05:37:32 +00:00
Lunny Xiao
466cc725bc
Move gitgraph from modules to services layer (#33527)
Just move, no code change.
2025-02-07 03:05:25 +00:00
Alexander McRae
a1f1bccd7a
Add go wrapper around git diff-tree --raw -r -M (#33369)
* Implemented calling git diff-tree
 * Ensures wrapper function is called with valid arguments
 * Parses output into go struct, using strong typing when possible
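
A self-contained sketch of what such a wrapper can look like (struct and function names are assumptions, not the PR's actual API): run `git diff-tree --raw -r -M` and parse each raw output line into a typed record.

```go
package main

import (
	"bufio"
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

// DiffTreeRecord is one parsed `--raw` output line.
type DiffTreeRecord struct {
	OldMode, NewMode string
	OldSHA, NewSHA   string
	Status           string   // A, M, D, R100, ...
	Paths            []string // one path, or two for renames/copies
}

// runDiffTree shells out to git and parses the raw output.
func runDiffTree(repoDir, base, head string) ([]DiffTreeRecord, error) {
	cmd := exec.Command("git", "diff-tree", "--raw", "-r", "-M", base, head)
	cmd.Dir = repoDir
	out, err := cmd.Output()
	if err != nil {
		return nil, err
	}
	var records []DiffTreeRecord
	scanner := bufio.NewScanner(bytes.NewReader(out))
	for scanner.Scan() {
		// Raw format: ":oldmode newmode oldsha newsha status\tpath[\tpath]"
		meta, paths, ok := strings.Cut(strings.TrimPrefix(scanner.Text(), ":"), "\t")
		if !ok {
			continue
		}
		f := strings.Fields(meta)
		if len(f) < 5 {
			continue
		}
		records = append(records, DiffTreeRecord{
			OldMode: f[0], NewMode: f[1], OldSHA: f[2], NewSHA: f[3],
			Status: f[4], Paths: strings.Split(paths, "\t"),
		})
	}
	return records, scanner.Err()
}

func main() {
	recs, err := runDiffTree(".", "HEAD~1", "HEAD")
	if err != nil {
		panic(err)
	}
	for _, r := range recs {
		fmt.Println(r.Status, r.Paths)
	}
}
```
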
2025-02-07 00:58:28 +00:00
GiteaBot
dbc18f400a [skip ci] Updated translations via Crowdin 2025-02-07 00:31:35 +00:00
ChristopherHX
0070ffe560
Update MAINTAINERS (#33529)
* Add myself to the maintainers file
2025-02-07 08:10:49 +08:00
Kerwin Bryant
40426addfa
Add cropping support when modifying the user/org/repo avatar (#33498)
Fixed #33321

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2025-02-06 21:07:44 +08:00
GiteaBot
943cc4f989 [skip ci] Updated translations via Crowdin 2025-02-06 00:31:49 +00:00
John Smith
a025fa70ab
Add alphabetical project sorting (#33504)
Fixes #33500
2025-02-05 19:09:43 +00:00
wxiaoguang
fa0c8ae50f
Refactor gitdiff test (#33507) 2025-02-05 16:09:58 +00:00
techknowlogick
7e596bd7a9
add timetzdata build tag to binary releases (#33463)
`timetzdata` is already used in the Docker images; this includes it in the
binary release files too.

Related to #33235 (I don't have a Windows machine set up to test this, though)

---------

Co-authored-by: wxiaoguang <wxiaoguang@gmail.com>
2025-02-05 04:17:08 +00:00
Lunny Xiao
6999651b6d
Fix unnecessary comment when moving issue on the same project column (#33496)
Fix #33482
2025-02-05 11:51:10 +08:00
420 changed files with 11631 additions and 7477 deletions

View File

@ -1,3 +1,4 @@
const vitestPlugin = require('@vitest/eslint-plugin');
const restrictedSyntax = ['WithStatement', 'ForInStatement', 'LabeledStatement', 'SequenceExpression'];
module.exports = {
@ -37,8 +38,6 @@ module.exports = {
'eslint-plugin-regexp',
'eslint-plugin-sonarjs',
'eslint-plugin-unicorn',
'eslint-plugin-vitest',
'eslint-plugin-vitest-globals',
'eslint-plugin-wc',
],
env: {
@ -46,6 +45,13 @@ module.exports = {
node: true,
},
overrides: [
{
files: ['**/*.cjs'],
rules: {
'import-x/no-commonjs': [0],
'@typescript-eslint/no-require-imports': [0],
},
},
{
files: ['web_src/**/*'],
globals: {
@ -82,59 +88,58 @@ module.exports = {
},
{
files: ['**/*.test.*', 'web_src/js/test/setup.ts'],
env: {
'vitest-globals/env': true,
},
plugins: ['@vitest/eslint-plugin'],
globals: vitestPlugin.environments.env.globals,
rules: {
'vitest/consistent-test-filename': [0],
'vitest/consistent-test-it': [0],
'vitest/expect-expect': [0],
'vitest/max-expects': [0],
'vitest/max-nested-describe': [0],
'vitest/no-alias-methods': [0],
'vitest/no-commented-out-tests': [0],
'vitest/no-conditional-expect': [0],
'vitest/no-conditional-in-test': [0],
'vitest/no-conditional-tests': [0],
'vitest/no-disabled-tests': [0],
'vitest/no-done-callback': [0],
'vitest/no-duplicate-hooks': [0],
'vitest/no-focused-tests': [0],
'vitest/no-hooks': [0],
'vitest/no-identical-title': [2],
'vitest/no-interpolation-in-snapshots': [0],
'vitest/no-large-snapshots': [0],
'vitest/no-mocks-import': [0],
'vitest/no-restricted-matchers': [0],
'vitest/no-restricted-vi-methods': [0],
'vitest/no-standalone-expect': [0],
'vitest/no-test-prefixes': [0],
'vitest/no-test-return-statement': [0],
'vitest/prefer-called-with': [0],
'vitest/prefer-comparison-matcher': [0],
'vitest/prefer-each': [0],
'vitest/prefer-equality-matcher': [0],
'vitest/prefer-expect-resolves': [0],
'vitest/prefer-hooks-in-order': [0],
'vitest/prefer-hooks-on-top': [2],
'vitest/prefer-lowercase-title': [0],
'vitest/prefer-mock-promise-shorthand': [0],
'vitest/prefer-snapshot-hint': [0],
'vitest/prefer-spy-on': [0],
'vitest/prefer-strict-equal': [0],
'vitest/prefer-to-be': [0],
'vitest/prefer-to-be-falsy': [0],
'vitest/prefer-to-be-object': [0],
'vitest/prefer-to-be-truthy': [0],
'vitest/prefer-to-contain': [0],
'vitest/prefer-to-have-length': [0],
'vitest/prefer-todo': [0],
'vitest/require-hook': [0],
'vitest/require-to-throw-message': [0],
'vitest/require-top-level-describe': [0],
'vitest/valid-describe-callback': [2],
'vitest/valid-expect': [2],
'vitest/valid-title': [2],
'@vitest/consistent-test-filename': [0],
'@vitest/consistent-test-it': [0],
'@vitest/expect-expect': [0],
'@vitest/max-expects': [0],
'@vitest/max-nested-describe': [0],
'@vitest/no-alias-methods': [0],
'@vitest/no-commented-out-tests': [0],
'@vitest/no-conditional-expect': [0],
'@vitest/no-conditional-in-test': [0],
'@vitest/no-conditional-tests': [0],
'@vitest/no-disabled-tests': [0],
'@vitest/no-done-callback': [0],
'@vitest/no-duplicate-hooks': [0],
'@vitest/no-focused-tests': [0],
'@vitest/no-hooks': [0],
'@vitest/no-identical-title': [2],
'@vitest/no-interpolation-in-snapshots': [0],
'@vitest/no-large-snapshots': [0],
'@vitest/no-mocks-import': [0],
'@vitest/no-restricted-matchers': [0],
'@vitest/no-restricted-vi-methods': [0],
'@vitest/no-standalone-expect': [0],
'@vitest/no-test-prefixes': [0],
'@vitest/no-test-return-statement': [0],
'@vitest/prefer-called-with': [0],
'@vitest/prefer-comparison-matcher': [0],
'@vitest/prefer-each': [0],
'@vitest/prefer-equality-matcher': [0],
'@vitest/prefer-expect-resolves': [0],
'@vitest/prefer-hooks-in-order': [0],
'@vitest/prefer-hooks-on-top': [2],
'@vitest/prefer-lowercase-title': [0],
'@vitest/prefer-mock-promise-shorthand': [0],
'@vitest/prefer-snapshot-hint': [0],
'@vitest/prefer-spy-on': [0],
'@vitest/prefer-strict-equal': [0],
'@vitest/prefer-to-be': [0],
'@vitest/prefer-to-be-falsy': [0],
'@vitest/prefer-to-be-object': [0],
'@vitest/prefer-to-be-truthy': [0],
'@vitest/prefer-to-contain': [0],
'@vitest/prefer-to-have-length': [0],
'@vitest/prefer-todo': [0],
'@vitest/require-hook': [0],
'@vitest/require-to-throw-message': [0],
'@vitest/require-top-level-describe': [0],
'@vitest/valid-describe-callback': [2],
'@vitest/valid-expect': [2],
'@vitest/valid-title': [2],
},
},
{
@ -163,7 +168,7 @@ module.exports = {
{
files: ['tests/e2e/**'],
plugins: [
'eslint-plugin-playwright'
'eslint-plugin-playwright',
],
extends: [
'plugin:playwright/recommended',

View File

@ -1,8 +1,8 @@
name: cron-licenses
on:
#schedule:
# - cron: "7 0 * * 1" # every Monday at 00:07 UTC
# schedule:
# - cron: "7 0 * * 1" # every Monday at 00:07 UTC
workflow_dispatch:
jobs:

.mailmap (new file, 2 lines changed)
View File

@ -0,0 +1,2 @@
Unknwon <u@gogs.io> <joe2010xtmf@163.com>
Unknwon <u@gogs.io> 无闻 <u@gogs.io>

View File

@ -1,5 +1,5 @@
# Build stage
FROM docker.io/library/golang:1.23-alpine3.21 AS build-env
FROM docker.io/library/golang:1.24-alpine3.21 AS build-env
ARG GOPROXY
ENV GOPROXY=${GOPROXY:-direct}

View File

@ -1,5 +1,5 @@
# Build stage
FROM docker.io/library/golang:1.23-alpine3.21 AS build-env
FROM docker.io/library/golang:1.24-alpine3.21 AS build-env
ARG GOPROXY
ENV GOPROXY=${GOPROXY:-direct}

View File

@ -63,3 +63,4 @@ Kemal Zebari <kemalzebra@gmail.com> (@kemzeb)
Rowan Bohde <rowan.bohde@gmail.com> (@bohde)
hiifong <i@hiif.ong> (@hiifong)
metiftikci <metiftikci@hotmail.com> (@metiftikci)
Christopher Homberger <christopher.homberger@web.de> (@ChristopherHX)

View File

@ -23,7 +23,7 @@ SHASUM ?= shasum -a 256
HAS_GO := $(shell hash $(GO) > /dev/null 2>&1 && echo yes)
COMMA := ,
XGO_VERSION := go-1.23.x
XGO_VERSION := go-1.24.x
AIR_PACKAGE ?= github.com/air-verse/air@v1
EDITORCONFIG_CHECKER_PACKAGE ?= github.com/editorconfig-checker/editorconfig-checker/v3/cmd/editorconfig-checker@v3.1.2
@ -144,9 +144,9 @@ TAR_EXCLUDES := .git data indexers queues log node_modules $(EXECUTABLE) $(FOMAN
GO_DIRS := build cmd models modules routers services tests
WEB_DIRS := web_src/js web_src/css
ESLINT_FILES := web_src/js tools *.js *.ts tests/e2e
ESLINT_FILES := web_src/js tools *.js *.ts *.cjs tests/e2e
STYLELINT_FILES := web_src/css web_src/js/components/*.vue
SPELLCHECK_FILES := $(GO_DIRS) $(WEB_DIRS) templates options/locale/locale_en-US.ini .github $(filter-out CHANGELOG.md, $(wildcard *.go *.js *.md *.yml *.yaml *.toml))
SPELLCHECK_FILES := $(GO_DIRS) $(WEB_DIRS) templates options/locale/locale_en-US.ini .github $(filter-out CHANGELOG.md, $(wildcard *.go *.js *.md *.yml *.yaml *.toml)) $(filter-out tools/misspellings.csv, $(wildcard tools/*))
EDITORCONFIG_FILES := templates .github/workflows options/locale/locale_en-US.ini
GO_SOURCES := $(wildcard *.go)
@ -393,7 +393,7 @@ lint-templates: .venv node_modules ## lint template files
.PHONY: lint-yaml
lint-yaml: .venv ## lint yaml files
@poetry run yamllint .
@poetry run yamllint -s .
.PHONY: watch
watch: ## watch everything and continuously rebuild

View File

@ -150,10 +150,64 @@ for the full license text.
<details>
<summary>Looking for an overview of the interface? Check it out!</summary>
|![Dashboard](https://dl.gitea.com/screenshots/home_timeline.png)|![User Profile](https://dl.gitea.com/screenshots/user_profile.png)|![Global Issues](https://dl.gitea.com/screenshots/global_issues.png)|
|:---:|:---:|:---:|
|![Branches](https://dl.gitea.com/screenshots/branches.png)|![Web Editor](https://dl.gitea.com/screenshots/web_editor.png)|![Activity](https://dl.gitea.com/screenshots/activity.png)|
|![New Migration](https://dl.gitea.com/screenshots/migration.png)|![Migrating](https://dl.gitea.com/screenshots/migration.gif)|![Pull Request View](https://image.ibb.co/e02dSb/6.png)|
|![Pull Request Dark](https://dl.gitea.com/screenshots/pull_requests_dark.png)|![Diff Review Dark](https://dl.gitea.com/screenshots/review_dark.png)|![Diff Dark](https://dl.gitea.com/screenshots/diff_dark.png)|
### Login/Register Page
![Login](https://dl.gitea.com/screenshots/login.png)
![Register](https://dl.gitea.com/screenshots/register.png)
### User Dashboard
![Home](https://dl.gitea.com/screenshots/home.png)
![Issues](https://dl.gitea.com/screenshots/issues.png)
![Pull Requests](https://dl.gitea.com/screenshots/pull_requests.png)
![Milestones](https://dl.gitea.com/screenshots/milestones.png)
### User Profile
![Profile](https://dl.gitea.com/screenshots/user_profile.png)
### Explore
![Repos](https://dl.gitea.com/screenshots/explore_repos.png)
![Users](https://dl.gitea.com/screenshots/explore_users.png)
![Orgs](https://dl.gitea.com/screenshots/explore_orgs.png)
### Repository
![Home](https://dl.gitea.com/screenshots/repo_home.png)
![Commits](https://dl.gitea.com/screenshots/repo_commits.png)
![Branches](https://dl.gitea.com/screenshots/repo_branches.png)
![Labels](https://dl.gitea.com/screenshots/repo_labels.png)
![Milestones](https://dl.gitea.com/screenshots/repo_milestones.png)
![Releases](https://dl.gitea.com/screenshots/repo_releases.png)
![Tags](https://dl.gitea.com/screenshots/repo_tags.png)
#### Repository Issue
![List](https://dl.gitea.com/screenshots/repo_issues.png)
![Issue](https://dl.gitea.com/screenshots/repo_issue.png)
#### Repository Pull Requests
![List](https://dl.gitea.com/screenshots/repo_pull_requests.png)
![Pull Request](https://dl.gitea.com/screenshots/repo_pull_request.png)
![File](https://dl.gitea.com/screenshots/repo_pull_request_file.png)
![Commits](https://dl.gitea.com/screenshots/repo_pull_request_commits.png)
#### Repository Actions
![List](https://dl.gitea.com/screenshots/repo_actions.png)
![Details](https://dl.gitea.com/screenshots/repo_actions_run.png)
#### Repository Activity
![Activity](https://dl.gitea.com/screenshots/repo_activity.png)
![Contributors](https://dl.gitea.com/screenshots/repo_contributors.png)
![Code Frequency](https://dl.gitea.com/screenshots/repo_code_frequency.png)
![Recent Commits](https://dl.gitea.com/screenshots/repo_recent_commits.png)
### Organization
![Home](https://dl.gitea.com/screenshots/org_home.png)
</details>

View File

@ -93,10 +93,64 @@ Gitea 提供官方的 [go-sdk](https://gitea.com/gitea/go-sdk),以及名为 [t
<details>
<summary>截图</summary>
|![Dashboard](https://dl.gitea.com/screenshots/home_timeline.png)|![User Profile](https://dl.gitea.com/screenshots/user_profile.png)|![Global Issues](https://dl.gitea.com/screenshots/global_issues.png)|
|:---:|:---:|:---:|
|![Branches](https://dl.gitea.com/screenshots/branches.png)|![Web Editor](https://dl.gitea.com/screenshots/web_editor.png)|![Activity](https://dl.gitea.com/screenshots/activity.png)|
|![New Migration](https://dl.gitea.com/screenshots/migration.png)|![Migrating](https://dl.gitea.com/screenshots/migration.gif)|![Pull Request View](https://image.ibb.co/e02dSb/6.png)|
|![Pull Request Dark](https://dl.gitea.com/screenshots/pull_requests_dark.png)|![Diff Review Dark](https://dl.gitea.com/screenshots/review_dark.png)|![Diff Dark](https://dl.gitea.com/screenshots/diff_dark.png)|
### 登录界面
![登录](https://dl.gitea.com/screenshots/login.png)
![注册](https://dl.gitea.com/screenshots/register.png)
### 用户首页
![首页](https://dl.gitea.com/screenshots/home.png)
![工单列表](https://dl.gitea.com/screenshots/issues.png)
![合并请求列表](https://dl.gitea.com/screenshots/pull_requests.png)
![里程碑列表](https://dl.gitea.com/screenshots/milestones.png)
### 用户资料
![用户资料](https://dl.gitea.com/screenshots/user_profile.png)
### 探索
![仓库列表](https://dl.gitea.com/screenshots/explore_repos.png)
![用户列表](https://dl.gitea.com/screenshots/explore_users.png)
![组织列表](https://dl.gitea.com/screenshots/explore_orgs.png)
### 仓库
![首页](https://dl.gitea.com/screenshots/repo_home.png)
![提交列表](https://dl.gitea.com/screenshots/repo_commits.png)
![分支列表](https://dl.gitea.com/screenshots/repo_branches.png)
![标签列表](https://dl.gitea.com/screenshots/repo_labels.png)
![里程碑列表](https://dl.gitea.com/screenshots/repo_milestones.png)
![版本发布](https://dl.gitea.com/screenshots/repo_releases.png)
![标签列表](https://dl.gitea.com/screenshots/repo_tags.png)
#### 仓库工单
![列表](https://dl.gitea.com/screenshots/repo_issues.png)
![工单](https://dl.gitea.com/screenshots/repo_issue.png)
#### 仓库合并请求
![列表](https://dl.gitea.com/screenshots/repo_pull_requests.png)
![合并请求](https://dl.gitea.com/screenshots/repo_pull_request.png)
![文件](https://dl.gitea.com/screenshots/repo_pull_request_file.png)
![提交列表](https://dl.gitea.com/screenshots/repo_pull_request_commits.png)
#### 仓库 Actions
![列表](https://dl.gitea.com/screenshots/repo_actions.png)
![Run](https://dl.gitea.com/screenshots/repo_actions_run.png)
#### 仓库动态
![动态](https://dl.gitea.com/screenshots/repo_activity.png)
![贡献者](https://dl.gitea.com/screenshots/repo_contributors.png)
![代码频率](https://dl.gitea.com/screenshots/repo_code_frequency.png)
![最近的提交](https://dl.gitea.com/screenshots/repo_recent_commits.png)
### 组织
![首页](https://dl.gitea.com/screenshots/org_home.png)
</details>

View File

@ -31,6 +31,11 @@ var microcmdUserCreate = &cli.Command{
Name: "username",
Usage: "Username",
},
&cli.StringFlag{
Name: "user-type",
Usage: "Set user's type: individual or bot",
Value: "individual",
},
&cli.StringFlag{
Name: "password",
Usage: "User password",
@ -77,6 +82,22 @@ func runCreateUser(c *cli.Context) error {
return err
}
userTypes := map[string]user_model.UserType{
"individual": user_model.UserTypeIndividual,
"bot": user_model.UserTypeBot,
}
userType, ok := userTypes[c.String("user-type")]
if !ok {
return fmt.Errorf("invalid user type: %s", c.String("user-type"))
}
if userType != user_model.UserTypeIndividual {
// Some other commands like "change-password" also only support individual users.
// It needs to clarify the "password" behavior for bot users in the future.
// At the moment, we do not allow setting password for bot users.
if c.IsSet("password") || c.IsSet("random-password") {
return errors.New("password can only be set for individual users")
}
}
if c.IsSet("name") && c.IsSet("username") {
return errors.New("cannot set both --name and --username flags")
}
@ -118,16 +139,19 @@ func runCreateUser(c *cli.Context) error {
return err
}
fmt.Printf("generated random password is '%s'\n", password)
} else {
} else if userType == user_model.UserTypeIndividual {
return errors.New("must set either password or random-password flag")
}
isAdmin := c.Bool("admin")
mustChangePassword := true // always default to true
if c.IsSet("must-change-password") {
if userType != user_model.UserTypeIndividual {
return errors.New("must-change-password flag can only be set for individual users")
}
// if the flag is set, use the value provided by the user
mustChangePassword = c.Bool("must-change-password")
} else {
} else if userType == user_model.UserTypeIndividual {
// check whether there are users in the database
hasUserRecord, err := db.IsTableNotEmpty(&user_model.User{})
if err != nil {
@ -151,8 +175,9 @@ func runCreateUser(c *cli.Context) error {
u := &user_model.User{
Name: username,
Email: c.String("email"),
Passwd: password,
IsAdmin: isAdmin,
Type: userType,
Passwd: password,
MustChangePassword: mustChangePassword,
Visibility: visibility,
}

View File

@ -13,32 +13,54 @@ import (
user_model "code.gitea.io/gitea/models/user"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
)
func TestAdminUserCreate(t *testing.T) {
app := NewMainApp(AppVersion{})
reset := func() {
assert.NoError(t, db.TruncateBeans(db.DefaultContext, &user_model.User{}))
assert.NoError(t, db.TruncateBeans(db.DefaultContext, &user_model.EmailAddress{}))
require.NoError(t, db.TruncateBeans(db.DefaultContext, &user_model.User{}))
require.NoError(t, db.TruncateBeans(db.DefaultContext, &user_model.EmailAddress{}))
}
type createCheck struct{ IsAdmin, MustChangePassword bool }
createUser := func(name, args string) createCheck {
assert.NoError(t, app.Run(strings.Fields(fmt.Sprintf("./gitea admin user create --username %s --email %s@gitea.local %s --password foobar", name, name, args))))
u := unittest.AssertExistsAndLoadBean(t, &user_model.User{LowerName: name})
return createCheck{u.IsAdmin, u.MustChangePassword}
}
reset()
assert.Equal(t, createCheck{IsAdmin: false, MustChangePassword: false}, createUser("u", ""), "first non-admin user doesn't need to change password")
t.Run("MustChangePassword", func(t *testing.T) {
type check struct {
IsAdmin bool
MustChangePassword bool
}
createCheck := func(name, args string) check {
require.NoError(t, app.Run(strings.Fields(fmt.Sprintf("./gitea admin user create --username %s --email %s@gitea.local %s --password foobar", name, name, args))))
u := unittest.AssertExistsAndLoadBean(t, &user_model.User{LowerName: name})
return check{IsAdmin: u.IsAdmin, MustChangePassword: u.MustChangePassword}
}
reset()
assert.Equal(t, check{IsAdmin: false, MustChangePassword: false}, createCheck("u", ""), "first non-admin user doesn't need to change password")
reset()
assert.Equal(t, createCheck{IsAdmin: true, MustChangePassword: false}, createUser("u", "--admin"), "first admin user doesn't need to change password")
reset()
assert.Equal(t, check{IsAdmin: true, MustChangePassword: false}, createCheck("u", "--admin"), "first admin user doesn't need to change password")
reset()
assert.Equal(t, createCheck{IsAdmin: true, MustChangePassword: true}, createUser("u", "--admin --must-change-password"))
assert.Equal(t, createCheck{IsAdmin: true, MustChangePassword: true}, createUser("u2", "--admin"))
assert.Equal(t, createCheck{IsAdmin: true, MustChangePassword: false}, createUser("u3", "--admin --must-change-password=false"))
assert.Equal(t, createCheck{IsAdmin: false, MustChangePassword: true}, createUser("u4", ""))
assert.Equal(t, createCheck{IsAdmin: false, MustChangePassword: false}, createUser("u5", "--must-change-password=false"))
reset()
assert.Equal(t, check{IsAdmin: true, MustChangePassword: true}, createCheck("u", "--admin --must-change-password"))
assert.Equal(t, check{IsAdmin: true, MustChangePassword: true}, createCheck("u2", "--admin"))
assert.Equal(t, check{IsAdmin: true, MustChangePassword: false}, createCheck("u3", "--admin --must-change-password=false"))
assert.Equal(t, check{IsAdmin: false, MustChangePassword: true}, createCheck("u4", ""))
assert.Equal(t, check{IsAdmin: false, MustChangePassword: false}, createCheck("u5", "--must-change-password=false"))
})
t.Run("UserType", func(t *testing.T) {
createUser := func(name, args string) error {
return app.Run(strings.Fields(fmt.Sprintf("./gitea admin user create --username %s --email %s@gitea.local %s", name, name, args)))
}
reset()
assert.ErrorContains(t, createUser("u", "--user-type invalid"), "invalid user type")
assert.ErrorContains(t, createUser("u", "--user-type bot --password 123"), "can only be set for individual users")
assert.ErrorContains(t, createUser("u", "--user-type bot --must-change-password"), "can only be set for individual users")
assert.NoError(t, createUser("u", "--user-type bot"))
u := unittest.AssertExistsAndLoadBean(t, &user_model.User{LowerName: "u"})
assert.Equal(t, user_model.UserTypeBot, u.Type)
assert.Equal(t, "", u.Passwd)
})
}

View File

@ -196,7 +196,7 @@ func migrateActionsLog(ctx context.Context, dstStorage storage.ObjectStorage) er
func migrateActionsArtifacts(ctx context.Context, dstStorage storage.ObjectStorage) error {
return db.Iterate(ctx, nil, func(ctx context.Context, artifact *actions_model.ActionArtifact) error {
if artifact.Status == int64(actions_model.ArtifactStatusExpired) {
if artifact.Status == actions_model.ArtifactStatusExpired {
return nil
}

flake.lock (generated, 6 lines changed)
View File

@ -20,11 +20,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1736798957,
"narHash": "sha256-qwpCtZhSsSNQtK4xYGzMiyEDhkNzOCz/Vfu4oL2ETsQ=",
"lastModified": 1739214665,
"narHash": "sha256-26L8VAu3/1YRxS8MHgBOyOM8xALdo6N0I04PgorE7UM=",
"owner": "nixos",
"repo": "nixpkgs",
"rev": "9abb87b552b7f55ac8916b6fc9e5cb486656a2f3",
"rev": "64e75cd44acf21c7933d61d7721e812eac1b5a0a",
"type": "github"
},
"original": {

View File

@ -29,13 +29,13 @@
poetry
# backend
go_1_23
go_1_24
gofumpt
sqlite
];
shellHook = ''
export GO="${pkgs.go_1_23}/bin/go"
export GOROOT="${pkgs.go_1_23}/share/go"
export GO="${pkgs.go_1_24}/bin/go"
export GOROOT="${pkgs.go_1_24}/share/go"
'';
};
}

go.mod (2 lines changed)
View File

@ -1,6 +1,6 @@
module code.gitea.io/gitea
go 1.23
go 1.24
// rfc5280 said: "The serial number is an integer assigned by the CA to each certificate."
// But some CAs use negative serial number, just relax the check. related:

main_timezones.go (new file, 16 lines changed)
View File

@ -0,0 +1,16 @@
// Copyright 2025 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
//go:build windows
package main
// Golang has the ability to load OS's timezone data from most UNIX systems (https://github.com/golang/go/blob/master/src/time/zoneinfo_unix.go)
// Even if the timezone data is missing, users could install the related packages to get it.
// But on Windows, although `zoneinfo_windows.go` tries to load the timezone data from Windows registry,
// some users still suffer from the issue that the timezone data is missing: https://github.com/go-gitea/gitea/issues/33235
// So we import the tzdata package to make sure the timezone data is included in the binary.
//
// For non-Windows package builders, they could still use the "TAGS=timetzdata" to include the tzdata package in the binary.
// If we decided to add the tzdata for other platforms, modify the "go:build" directive above.
import _ "time/tzdata"

View File

@ -48,7 +48,7 @@ type ActionArtifact struct {
ContentEncoding string // The content encoding of the artifact
ArtifactPath string `xorm:"index unique(runid_name_path)"` // The path to the artifact when runner uploads it
ArtifactName string `xorm:"index unique(runid_name_path)"` // The name of the artifact when runner uploads it
Status int64 `xorm:"index"` // The status of the artifact, uploading, expired or need-delete
Status ArtifactStatus `xorm:"index"` // The status of the artifact, uploading, expired or need-delete
CreatedUnix timeutil.TimeStamp `xorm:"created"`
UpdatedUnix timeutil.TimeStamp `xorm:"updated index"`
ExpiredUnix timeutil.TimeStamp `xorm:"index"` // The time when the artifact will be expired
@ -68,7 +68,7 @@ func CreateArtifact(ctx context.Context, t *ActionTask, artifactName, artifactPa
RepoID: t.RepoID,
OwnerID: t.OwnerID,
CommitSHA: t.CommitSHA,
Status: int64(ArtifactStatusUploadPending),
Status: ArtifactStatusUploadPending,
ExpiredUnix: timeutil.TimeStamp(time.Now().Unix() + timeutil.Day*expiredDays),
}
if _, err := db.GetEngine(ctx).Insert(artifact); err != nil {
@ -108,12 +108,19 @@ func UpdateArtifactByID(ctx context.Context, id int64, art *ActionArtifact) erro
type FindArtifactsOptions struct {
db.ListOptions
RepoID int64
RunID int64
ArtifactName string
Status int
RepoID int64
RunID int64
ArtifactName string
Status int
FinalizedArtifactsV4 bool
}
func (opts FindArtifactsOptions) ToOrders() string {
return "id"
}
var _ db.FindOptionsOrder = (*FindArtifactsOptions)(nil)
func (opts FindArtifactsOptions) ToConds() builder.Cond {
cond := builder.NewCond()
if opts.RepoID > 0 {
@ -128,11 +135,15 @@ func (opts FindArtifactsOptions) ToConds() builder.Cond {
if opts.Status > 0 {
cond = cond.And(builder.Eq{"status": opts.Status})
}
if opts.FinalizedArtifactsV4 {
cond = cond.And(builder.Eq{"status": ArtifactStatusUploadConfirmed}.Or(builder.Eq{"status": ArtifactStatusExpired}))
cond = cond.And(builder.Eq{"content_encoding": "application/zip"})
}
return cond
}
// ActionArtifactMeta is the meta data of an artifact
// ActionArtifactMeta is the meta-data of an artifact
type ActionArtifactMeta struct {
ArtifactName string
FileSize int64
@ -166,18 +177,18 @@ func ListPendingDeleteArtifacts(ctx context.Context, limit int) ([]*ActionArtifa
// SetArtifactExpired sets an artifact to expired
func SetArtifactExpired(ctx context.Context, artifactID int64) error {
_, err := db.GetEngine(ctx).Where("id=? AND status = ?", artifactID, ArtifactStatusUploadConfirmed).Cols("status").Update(&ActionArtifact{Status: int64(ArtifactStatusExpired)})
_, err := db.GetEngine(ctx).Where("id=? AND status = ?", artifactID, ArtifactStatusUploadConfirmed).Cols("status").Update(&ActionArtifact{Status: ArtifactStatusExpired})
return err
}
// SetArtifactNeedDelete sets an artifact to need-delete, cron job will delete it
func SetArtifactNeedDelete(ctx context.Context, runID int64, name string) error {
_, err := db.GetEngine(ctx).Where("run_id=? AND artifact_name=? AND status = ?", runID, name, ArtifactStatusUploadConfirmed).Cols("status").Update(&ActionArtifact{Status: int64(ArtifactStatusPendingDeletion)})
_, err := db.GetEngine(ctx).Where("run_id=? AND artifact_name=? AND status = ?", runID, name, ArtifactStatusUploadConfirmed).Cols("status").Update(&ActionArtifact{Status: ArtifactStatusPendingDeletion})
return err
}
// SetArtifactDeleted sets an artifact to deleted
func SetArtifactDeleted(ctx context.Context, artifactID int64) error {
_, err := db.GetEngine(ctx).ID(artifactID).Cols("status").Update(&ActionArtifact{Status: int64(ArtifactStatusDeleted)})
_, err := db.GetEngine(ctx).ID(artifactID).Cols("status").Update(&ActionArtifact{Status: ArtifactStatusDeleted})
return err
}

View File

@ -10,6 +10,7 @@ import (
repo_model "code.gitea.io/gitea/models/repo"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/container"
"code.gitea.io/gitea/modules/translation"
webhook_module "code.gitea.io/gitea/modules/webhook"
"xorm.io/builder"
@ -112,14 +113,14 @@ type StatusInfo struct {
}
// GetStatusInfoList returns a slice of StatusInfo
func GetStatusInfoList(ctx context.Context) []StatusInfo {
func GetStatusInfoList(ctx context.Context, lang translation.Locale) []StatusInfo {
// same as those in aggregateJobStatus
allStatus := []Status{StatusSuccess, StatusFailure, StatusWaiting, StatusRunning}
statusInfoList := make([]StatusInfo, 0, 4)
for _, s := range allStatus {
statusInfoList = append(statusInfoList, StatusInfo{
Status: int(s),
DisplayedStatus: s.String(),
DisplayedStatus: s.LocaleString(lang),
})
}
return statusInfoList

View File

@ -167,6 +167,7 @@ func init() {
type FindRunnerOptions struct {
db.ListOptions
IDs []int64
RepoID int64
OwnerID int64 // it will be ignored if RepoID is set
Sort string
@ -178,6 +179,14 @@ type FindRunnerOptions struct {
func (opts FindRunnerOptions) ToConds() builder.Cond {
cond := builder.NewCond()
if len(opts.IDs) > 0 {
if len(opts.IDs) == 1 {
cond = cond.And(builder.Eq{"id": opts.IDs[0]})
} else {
cond = cond.And(builder.In("id", opts.IDs))
}
}
if opts.RepoID > 0 {
c := builder.NewCond().And(builder.Eq{"repo_id": opts.RepoID})
if opts.WithAvailable {

View File

@ -58,6 +58,7 @@ func InsertVariable(ctx context.Context, ownerID, repoID int64, name, data strin
type FindVariablesOpts struct {
db.ListOptions
IDs []int64
RepoID int64
OwnerID int64 // it will be ignored if RepoID is set
Name string
@ -65,6 +66,15 @@ type FindVariablesOpts struct {
func (opts FindVariablesOpts) ToConds() builder.Cond {
cond := builder.NewCond()
if len(opts.IDs) > 0 {
if len(opts.IDs) == 1 {
cond = cond.And(builder.Eq{"id": opts.IDs[0]})
} else {
cond = cond.And(builder.In("id", opts.IDs))
}
}
// Since we now support instance-level variables,
// there is no need to check for null values for `owner_id` and `repo_id`
cond = cond.And(builder.Eq{"repo_id": opts.RepoID})
@ -85,12 +95,12 @@ func FindVariables(ctx context.Context, opts FindVariablesOpts) ([]*ActionVariab
return db.Find[ActionVariable](ctx, opts)
}
func UpdateVariable(ctx context.Context, variable *ActionVariable) (bool, error) {
count, err := db.GetEngine(ctx).ID(variable.ID).Cols("name", "data").
Update(&ActionVariable{
Name: variable.Name,
Data: variable.Data,
})
func UpdateVariableCols(ctx context.Context, variable *ActionVariable, cols ...string) (bool, error) {
variable.Name = strings.ToUpper(variable.Name)
count, err := db.GetEngine(ctx).
ID(variable.ID).
Cols(cols...).
Update(variable)
return count != 0, err
}

View File

@ -106,7 +106,7 @@ func GPGKeyToEntity(ctx context.Context, k *GPGKey) (*openpgp.Entity, error) {
if err != nil {
return nil, err
}
keys, err := checkArmoredGPGKeyString(impKey.Content)
keys, err := CheckArmoredGPGKeyString(impKey.Content)
if err != nil {
return nil, err
}
@ -115,7 +115,7 @@ func GPGKeyToEntity(ctx context.Context, k *GPGKey) (*openpgp.Entity, error) {
// parseSubGPGKey parse a sub Key
func parseSubGPGKey(ownerID int64, primaryID string, pubkey *packet.PublicKey, expiry time.Time) (*GPGKey, error) {
content, err := base64EncPubKey(pubkey)
content, err := Base64EncPubKey(pubkey)
if err != nil {
return nil, err
}
@ -183,7 +183,7 @@ func parseGPGKey(ctx context.Context, ownerID int64, e *openpgp.Entity, verified
}
}
content, err := base64EncPubKey(pubkey)
content, err := Base64EncPubKey(pubkey)
if err != nil {
return nil, err
}
@ -239,33 +239,3 @@ func DeleteGPGKey(ctx context.Context, doer *user_model.User, id int64) (err err
return committer.Commit()
}
func checkKeyEmails(ctx context.Context, email string, keys ...*GPGKey) (bool, string) {
uid := int64(0)
var userEmails []*user_model.EmailAddress
var user *user_model.User
for _, key := range keys {
for _, e := range key.Emails {
if e.IsActivated && (email == "" || strings.EqualFold(e.Email, email)) {
return true, e.Email
}
}
if key.Verified && key.OwnerID != 0 {
if uid != key.OwnerID {
userEmails, _ = user_model.GetEmailAddresses(ctx, key.OwnerID)
uid = key.OwnerID
user = &user_model.User{ID: uid}
_, _ = user_model.GetUser(ctx, user)
}
for _, e := range userEmails {
if e.IsActivated && (email == "" || strings.EqualFold(e.Email, email)) {
return true, e.Email
}
}
if user.KeepEmailPrivate && strings.EqualFold(email, user.GetEmail()) {
return true, user.GetEmail()
}
}
}
return false, email
}

View File

@ -67,7 +67,7 @@ func addGPGSubKey(ctx context.Context, key *GPGKey) (err error) {
// AddGPGKey adds new public key to database.
func AddGPGKey(ctx context.Context, ownerID int64, content, token, signature string) ([]*GPGKey, error) {
ekeys, err := checkArmoredGPGKeyString(content)
ekeys, err := CheckArmoredGPGKeyString(content)
if err != nil {
return nil, err
}

View File

@ -4,17 +4,12 @@
package asymkey
import (
"context"
"fmt"
"hash"
"strings"
"code.gitea.io/gitea/models/db"
repo_model "code.gitea.io/gitea/models/repo"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/git"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/setting"
"github.com/ProtonMail/go-crypto/openpgp/packet"
)
@ -70,263 +65,6 @@ const (
NoKeyFound = "gpg.error.no_gpg_keys_found"
)
// ParseCommitsWithSignature checks if signaute of commits are corresponding to users gpg keys.
func ParseCommitsWithSignature(ctx context.Context, oldCommits []*user_model.UserCommit, repoTrustModel repo_model.TrustModelType, isOwnerMemberCollaborator func(*user_model.User) (bool, error)) []*SignCommit {
newCommits := make([]*SignCommit, 0, len(oldCommits))
keyMap := map[string]bool{}
for _, c := range oldCommits {
signCommit := &SignCommit{
UserCommit: c,
Verification: ParseCommitWithSignature(ctx, c.Commit),
}
_ = CalculateTrustStatus(signCommit.Verification, repoTrustModel, isOwnerMemberCollaborator, &keyMap)
newCommits = append(newCommits, signCommit)
}
return newCommits
}
// ParseCommitWithSignature check if signature is good against keystore.
func ParseCommitWithSignature(ctx context.Context, c *git.Commit) *CommitVerification {
var committer *user_model.User
if c.Committer != nil {
var err error
// Find Committer account
committer, err = user_model.GetUserByEmail(ctx, c.Committer.Email) // This finds the user by primary email or activated email so commit will not be valid if email is not
if err != nil { // Skipping not user for committer
committer = &user_model.User{
Name: c.Committer.Name,
Email: c.Committer.Email,
}
// We can expect this to often be an ErrUserNotExist. in the case
// it is not, however, it is important to log it.
if !user_model.IsErrUserNotExist(err) {
log.Error("GetUserByEmail: %v", err)
return &CommitVerification{
CommittingUser: committer,
Verified: false,
Reason: "gpg.error.no_committer_account",
}
}
}
}
// If no signature just report the committer
if c.Signature == nil {
return &CommitVerification{
CommittingUser: committer,
Verified: false, // Default value
Reason: "gpg.error.not_signed_commit", // Default value
}
}
// If this a SSH signature handle it differently
if strings.HasPrefix(c.Signature.Signature, "-----BEGIN SSH SIGNATURE-----") {
return ParseCommitWithSSHSignature(ctx, c, committer)
}
// Parsing signature
sig, err := extractSignature(c.Signature.Signature)
if err != nil { // Skipping failed to extract sign
log.Error("SignatureRead err: %v", err)
return &CommitVerification{
CommittingUser: committer,
Verified: false,
Reason: "gpg.error.extract_sign",
}
}
keyID := tryGetKeyIDFromSignature(sig)
defaultReason := NoKeyFound
// First check if the sig has a keyID and if so just look at that
if commitVerification := hashAndVerifyForKeyID(
ctx,
sig,
c.Signature.Payload,
committer,
keyID,
setting.AppName,
""); commitVerification != nil {
if commitVerification.Reason == BadSignature {
defaultReason = BadSignature
} else {
return commitVerification
}
}
// Now try to associate the signature with the committer, if present
if committer.ID != 0 {
keys, err := db.Find[GPGKey](ctx, FindGPGKeyOptions{
OwnerID: committer.ID,
})
if err != nil { // Skipping failed to get gpg keys of user
log.Error("ListGPGKeys: %v", err)
return &CommitVerification{
CommittingUser: committer,
Verified: false,
Reason: "gpg.error.failed_retrieval_gpg_keys",
}
}
if err := GPGKeyList(keys).LoadSubKeys(ctx); err != nil {
log.Error("LoadSubKeys: %v", err)
return &CommitVerification{
CommittingUser: committer,
Verified: false,
Reason: "gpg.error.failed_retrieval_gpg_keys",
}
}
committerEmailAddresses, _ := user_model.GetEmailAddresses(ctx, committer.ID)
activated := false
for _, e := range committerEmailAddresses {
if e.IsActivated && strings.EqualFold(e.Email, c.Committer.Email) {
activated = true
break
}
}
for _, k := range keys {
// Pre-check (& optimization) that emails attached to key can be attached to the committer email and can validate
canValidate := false
email := ""
if k.Verified && activated {
canValidate = true
email = c.Committer.Email
}
if !canValidate {
for _, e := range k.Emails {
if e.IsActivated && strings.EqualFold(e.Email, c.Committer.Email) {
canValidate = true
email = e.Email
break
}
}
}
if !canValidate {
continue // Skip this key
}
commitVerification := hashAndVerifyWithSubKeysCommitVerification(sig, c.Signature.Payload, k, committer, committer, email)
if commitVerification != nil {
return commitVerification
}
}
}
if setting.Repository.Signing.SigningKey != "" && setting.Repository.Signing.SigningKey != "default" && setting.Repository.Signing.SigningKey != "none" {
// OK we should try the default key
gpgSettings := git.GPGSettings{
Sign: true,
KeyID: setting.Repository.Signing.SigningKey,
Name: setting.Repository.Signing.SigningName,
Email: setting.Repository.Signing.SigningEmail,
}
if err := gpgSettings.LoadPublicKeyContent(); err != nil {
log.Error("Error getting default signing key: %s %v", gpgSettings.KeyID, err)
} else if commitVerification := verifyWithGPGSettings(ctx, &gpgSettings, sig, c.Signature.Payload, committer, keyID); commitVerification != nil {
if commitVerification.Reason == BadSignature {
defaultReason = BadSignature
} else {
return commitVerification
}
}
}
defaultGPGSettings, err := c.GetRepositoryDefaultPublicGPGKey(false)
if err != nil {
log.Error("Error getting default public gpg key: %v", err)
} else if defaultGPGSettings == nil {
log.Warn("Unable to get defaultGPGSettings for unattached commit: %s", c.ID.String())
} else if defaultGPGSettings.Sign {
if commitVerification := verifyWithGPGSettings(ctx, defaultGPGSettings, sig, c.Signature.Payload, committer, keyID); commitVerification != nil {
if commitVerification.Reason == BadSignature {
defaultReason = BadSignature
} else {
return commitVerification
}
}
}
return &CommitVerification{ // Default at this stage
CommittingUser: committer,
Verified: false,
Warning: defaultReason != NoKeyFound,
Reason: defaultReason,
SigningKey: &GPGKey{
KeyID: keyID,
},
}
}
func verifyWithGPGSettings(ctx context.Context, gpgSettings *git.GPGSettings, sig *packet.Signature, payload string, committer *user_model.User, keyID string) *CommitVerification {
// First try to find the key in the db
if commitVerification := hashAndVerifyForKeyID(ctx, sig, payload, committer, gpgSettings.KeyID, gpgSettings.Name, gpgSettings.Email); commitVerification != nil {
return commitVerification
}
// Otherwise we have to parse the key
ekeys, err := checkArmoredGPGKeyString(gpgSettings.PublicKeyContent)
if err != nil {
log.Error("Unable to get default signing key: %v", err)
return &CommitVerification{
CommittingUser: committer,
Verified: false,
Reason: "gpg.error.generate_hash",
}
}
for _, ekey := range ekeys {
pubkey := ekey.PrimaryKey
content, err := base64EncPubKey(pubkey)
if err != nil {
return &CommitVerification{
CommittingUser: committer,
Verified: false,
Reason: "gpg.error.generate_hash",
}
}
k := &GPGKey{
Content: content,
CanSign: pubkey.CanSign(),
KeyID: pubkey.KeyIdString(),
}
for _, subKey := range ekey.Subkeys {
content, err := base64EncPubKey(subKey.PublicKey)
if err != nil {
return &CommitVerification{
CommittingUser: committer,
Verified: false,
Reason: "gpg.error.generate_hash",
}
}
k.SubsKey = append(k.SubsKey, &GPGKey{
Content: content,
CanSign: subKey.PublicKey.CanSign(),
KeyID: subKey.PublicKey.KeyIdString(),
})
}
if commitVerification := hashAndVerifyWithSubKeysCommitVerification(sig, payload, k, committer, &user_model.User{
Name: gpgSettings.Name,
Email: gpgSettings.Email,
}, gpgSettings.Email); commitVerification != nil {
return commitVerification
}
if keyID == k.KeyID {
// This is a bad situation ... We have a key id that matches our default key but the signature doesn't match.
return &CommitVerification{
CommittingUser: committer,
Verified: false,
Warning: true,
Reason: BadSignature,
}
}
}
return nil
}
func verifySign(s *packet.Signature, h hash.Hash, k *GPGKey) error {
// Check if key can sign
if !k.CanSign {
@ -369,7 +107,7 @@ func hashAndVerifyWithSubKeys(sig *packet.Signature, payload string, k *GPGKey)
return nil, nil
}
func hashAndVerifyWithSubKeysCommitVerification(sig *packet.Signature, payload string, k *GPGKey, committer, signer *user_model.User, email string) *CommitVerification {
func HashAndVerifyWithSubKeysCommitVerification(sig *packet.Signature, payload string, k *GPGKey, committer, signer *user_model.User, email string) *CommitVerification {
key, err := hashAndVerifyWithSubKeys(sig, payload, k)
if err != nil { // Skip: failed to generate hash
return &CommitVerification{
@ -392,78 +130,6 @@ func hashAndVerifyWithSubKeysCommitVerification(sig *packet.Signature, payload s
return nil
}
func hashAndVerifyForKeyID(ctx context.Context, sig *packet.Signature, payload string, committer *user_model.User, keyID, name, email string) *CommitVerification {
if keyID == "" {
return nil
}
keys, err := db.Find[GPGKey](ctx, FindGPGKeyOptions{
KeyID: keyID,
IncludeSubKeys: true,
})
if err != nil {
log.Error("GetGPGKeysByKeyID: %v", err)
return &CommitVerification{
CommittingUser: committer,
Verified: false,
Reason: "gpg.error.failed_retrieval_gpg_keys",
}
}
if len(keys) == 0 {
return nil
}
for _, key := range keys {
var primaryKeys []*GPGKey
if key.PrimaryKeyID != "" {
primaryKeys, err = db.Find[GPGKey](ctx, FindGPGKeyOptions{
KeyID: key.PrimaryKeyID,
IncludeSubKeys: true,
})
if err != nil {
log.Error("GetGPGKeysByKeyID: %v", err)
return &CommitVerification{
CommittingUser: committer,
Verified: false,
Reason: "gpg.error.failed_retrieval_gpg_keys",
}
}
}
activated, email := checkKeyEmails(ctx, email, append([]*GPGKey{key}, primaryKeys...)...)
if !activated {
continue
}
signer := &user_model.User{
Name: name,
Email: email,
}
if key.OwnerID != 0 {
owner, err := user_model.GetUserByID(ctx, key.OwnerID)
if err == nil {
signer = owner
} else if !user_model.IsErrUserNotExist(err) {
log.Error("Failed to user_model.GetUserByID: %d for key ID: %d (%s) %v", key.OwnerID, key.ID, key.KeyID, err)
return &CommitVerification{
CommittingUser: committer,
Verified: false,
Reason: "gpg.error.no_committer_account",
}
}
}
commitVerification := hashAndVerifyWithSubKeysCommitVerification(sig, payload, key, committer, signer, email)
if commitVerification != nil {
return commitVerification
}
}
// This is a bad situation ... We have a key id that is in our database but the signature doesn't match.
return &CommitVerification{
CommittingUser: committer,
Verified: false,
Warning: true,
Reason: BadSignature,
}
}
// CalculateTrustStatus will calculate the TrustStatus for a commit verification within a repository
// There are several trust models in Gitea
func CalculateTrustStatus(verification *CommitVerification, repoTrustModel repo_model.TrustModelType, isOwnerMemberCollaborator func(*user_model.User) (bool, error), keyMap *map[string]bool) error {

View File

@ -33,9 +33,9 @@ import (
// This file provides common functions relating to GPG Keys
// checkArmoredGPGKeyString checks if the given key string is a valid GPG armored key.
// CheckArmoredGPGKeyString checks if the given key string is a valid GPG armored key.
// The function returns the actual public key on success
func checkArmoredGPGKeyString(content string) (openpgp.EntityList, error) {
func CheckArmoredGPGKeyString(content string) (openpgp.EntityList, error) {
list, err := openpgp.ReadArmoredKeyRing(strings.NewReader(content))
if err != nil {
return nil, ErrGPGKeyParsing{err}
@ -43,8 +43,8 @@ func checkArmoredGPGKeyString(content string) (openpgp.EntityList, error) {
return list, nil
}
// base64EncPubKey encode public key content to base 64
func base64EncPubKey(pubkey *packet.PublicKey) (string, error) {
// Base64EncPubKey encode public key content to base 64
func Base64EncPubKey(pubkey *packet.PublicKey) (string, error) {
var w bytes.Buffer
err := pubkey.Serialize(&w)
if err != nil {
@ -119,7 +119,7 @@ func readArmoredSign(r io.Reader) (body io.Reader, err error) {
return block.Body, nil
}
func extractSignature(s string) (*packet.Signature, error) {
func ExtractSignature(s string) (*packet.Signature, error) {
r, err := readArmoredSign(strings.NewReader(s))
if err != nil {
return nil, fmt.Errorf("Failed to read signature armor")
@ -135,7 +135,7 @@ func extractSignature(s string) (*packet.Signature, error) {
return sig, nil
}
func tryGetKeyIDFromSignature(sig *packet.Signature) string {
func TryGetKeyIDFromSignature(sig *packet.Signature) string {
if sig.IssuerKeyId != nil && (*sig.IssuerKeyId) != 0 {
return fmt.Sprintf("%016X", *sig.IssuerKeyId)
}

View File

@ -51,7 +51,7 @@ MkM/fdpyc2hY7Dl/+qFmN5MG5yGmMpQcX+RNNR222ibNC1D3wg==
=i9b7
-----END PGP PUBLIC KEY BLOCK-----`
key, err := checkArmoredGPGKeyString(testGPGArmor)
key, err := CheckArmoredGPGKeyString(testGPGArmor)
assert.NoError(t, err, "Could not parse a valid GPG public armored rsa key", key)
// TODO verify value of key
}
@ -72,7 +72,7 @@ OyjLLnFQiVmq7kEA/0z0CQe3ZQiQIq5zrs7Nh1XRkFAo8GlU/SGC9XFFi722
=ZiSe
-----END PGP PUBLIC KEY BLOCK-----`
key, err := checkArmoredGPGKeyString(testGPGArmor)
key, err := CheckArmoredGPGKeyString(testGPGArmor)
assert.NoError(t, err, "Could not parse a valid GPG public armored brainpoolP256r1 key", key)
// TODO verify value of key
}
@ -108,14 +108,14 @@ Av844q/BfRuVsJsK1NDNG09LC30B0l3LKBqlrRmRTUMHtgchdX2dY+p7GPOoSzlR
MkM/fdpyc2hY7Dl/+qFmN5MG5yGmMpQcX+RNNR222ibNC1D3wg==
=i9b7
-----END PGP PUBLIC KEY BLOCK-----`
keys, err := checkArmoredGPGKeyString(testGPGArmor)
keys, err := CheckArmoredGPGKeyString(testGPGArmor)
require.NotEmpty(t, keys)
ekey := keys[0]
assert.NoError(t, err, "Could not parse a valid GPG armored key", ekey)
pubkey := ekey.PrimaryKey
content, err := base64EncPubKey(pubkey)
content, err := Base64EncPubKey(pubkey)
assert.NoError(t, err, "Could not base64 encode a valid PublicKey content", ekey)
key := &GPGKey{
@ -176,9 +176,9 @@ committer Antoine GIRARD <sapk@sapk.fr> 1489013107 +0100
Unknown GPG key with good email
`
// Reading Sign
goodSig, err := extractSignature(testGoodSigArmor)
goodSig, err := ExtractSignature(testGoodSigArmor)
assert.NoError(t, err, "Could not parse a valid GPG armored signature", testGoodSigArmor)
badSig, err := extractSignature(testBadSigArmor)
badSig, err := ExtractSignature(testBadSigArmor)
assert.NoError(t, err, "Could not parse a valid GPG armored signature", testBadSigArmor)
// Generating hash of commit
@ -386,7 +386,7 @@ epiDVQ==
=VSKJ
-----END PGP PUBLIC KEY BLOCK-----
`
keys, err := checkArmoredGPGKeyString(testIssue6599)
keys, err := CheckArmoredGPGKeyString(testIssue6599)
assert.NoError(t, err)
if assert.NotEmpty(t, keys) {
ekey := keys[0]
@ -396,11 +396,11 @@ epiDVQ==
}
func TestTryGetKeyIDFromSignature(t *testing.T) {
assert.Empty(t, tryGetKeyIDFromSignature(&packet.Signature{}))
assert.Equal(t, "038D1A3EADDBEA9C", tryGetKeyIDFromSignature(&packet.Signature{
assert.Empty(t, TryGetKeyIDFromSignature(&packet.Signature{}))
assert.Equal(t, "038D1A3EADDBEA9C", TryGetKeyIDFromSignature(&packet.Signature{
IssuerKeyId: util.ToPointer(uint64(0x38D1A3EADDBEA9C)),
}))
assert.Equal(t, "038D1A3EADDBEA9C", tryGetKeyIDFromSignature(&packet.Signature{
assert.Equal(t, "038D1A3EADDBEA9C", TryGetKeyIDFromSignature(&packet.Signature{
IssuerFingerprint: []uint8{0xb, 0x23, 0x24, 0xc7, 0xe6, 0xfe, 0x4f, 0x3a, 0x6, 0x26, 0xc1, 0x21, 0x3, 0x8d, 0x1a, 0x3e, 0xad, 0xdb, 0xea, 0x9c},
}))
}

View File

@ -50,7 +50,7 @@ func VerifyGPGKey(ctx context.Context, ownerID int64, keyID, token, signature st
return "", err
}
sig, err := extractSignature(signature)
sig, err := ExtractSignature(signature)
if err != nil {
return "", ErrGPGInvalidTokenSignature{
ID: key.KeyID,

View File

@ -69,3 +69,21 @@
created_unix: 1730330775
updated_unix: 1730330775
expired_unix: 1738106775
-
id: 23
run_id: 793
runner_id: 1
repo_id: 2
owner_id: 2
commit_sha: c2d72f548424103f01ee1dc02889c1e2bff816b0
storage_path: "27/5/1730330775594233150.chunk"
file_size: 1024
file_compressed_size: 1024
content_encoding: "application/zip"
artifact_path: "artifact-v4-download.zip"
artifact_name: "artifact-v4-download"
status: 2
created_unix: 1730330775
updated_unix: 1730330775
expired_unix: 1738106775

View File

@ -0,0 +1,6 @@
-
id: 1
repo_id: 2
issue_id: 4
is_pull: false
pin_order: 1

View File

@ -496,47 +496,11 @@ type SignCommitWithStatuses struct {
*asymkey_model.SignCommit
}
// ParseCommitsWithStatus checks the commits' latest statuses and calculates the worst status state
func ParseCommitsWithStatus(ctx context.Context, oldCommits []*asymkey_model.SignCommit, repo *repo_model.Repository) []*SignCommitWithStatuses {
newCommits := make([]*SignCommitWithStatuses, 0, len(oldCommits))
for _, c := range oldCommits {
commit := &SignCommitWithStatuses{
SignCommit: c,
}
statuses, _, err := GetLatestCommitStatus(ctx, repo.ID, commit.ID.String(), db.ListOptions{})
if err != nil {
log.Error("GetLatestCommitStatus: %v", err)
} else {
commit.Statuses = statuses
commit.Status = CalcCommitStatus(statuses)
}
newCommits = append(newCommits, commit)
}
return newCommits
}
// hashCommitStatusContext hash context
func hashCommitStatusContext(context string) string {
return fmt.Sprintf("%x", sha1.Sum([]byte(context)))
}
// ConvertFromGitCommit converts git commits into SignCommitWithStatuses
func ConvertFromGitCommit(ctx context.Context, commits []*git.Commit, repo *repo_model.Repository) []*SignCommitWithStatuses {
return ParseCommitsWithStatus(ctx,
asymkey_model.ParseCommitsWithSignature(
ctx,
user_model.ValidateCommitsWithEmails(ctx, commits),
repo.GetTrustModel(),
func(user *user_model.User) (bool, error) {
return repo_model.IsOwnerMemberCollaborator(ctx, repo, user.ID)
},
),
repo,
)
}
// CommitStatusesHideActionsURL hide Gitea Actions urls
func CommitStatusesHideActionsURL(ctx context.Context, statuses []*CommitStatus) {
idToRepos := make(map[int64]*repo_model.Repository)

View File

@ -19,8 +19,6 @@ import (
repo_model "code.gitea.io/gitea/models/repo"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/container"
"code.gitea.io/gitea/modules/gitrepo"
"code.gitea.io/gitea/modules/json"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/optional"
"code.gitea.io/gitea/modules/references"
@ -774,41 +772,6 @@ func (c *Comment) CodeCommentLink(ctx context.Context) string {
return fmt.Sprintf("%s/files#%s", c.Issue.Link(), c.HashTag())
}
// LoadPushCommits Load push commits
func (c *Comment) LoadPushCommits(ctx context.Context) (err error) {
if c.Content == "" || c.Commits != nil || c.Type != CommentTypePullRequestPush {
return nil
}
var data PushActionContent
err = json.Unmarshal([]byte(c.Content), &data)
if err != nil {
return err
}
c.IsForcePush = data.IsForcePush
if c.IsForcePush {
if len(data.CommitIDs) != 2 {
return nil
}
c.OldCommit = data.CommitIDs[0]
c.NewCommit = data.CommitIDs[1]
} else {
gitRepo, closer, err := gitrepo.RepositoryFromContextOrOpen(ctx, c.Issue.Repo)
if err != nil {
return err
}
defer closer.Close()
c.Commits = git_model.ConvertFromGitCommit(ctx, gitRepo.GetCommitsFromIDs(data.CommitIDs), c.Issue.Repo)
c.CommitsNum = int64(len(c.Commits))
}
return err
}
// CreateComment creates comment with context
func CreateComment(ctx context.Context, opts *CreateCommentOptions) (_ *Comment, err error) {
ctx, committer, err := db.TxContext(ctx)

View File

@ -86,8 +86,10 @@ func findCodeComments(ctx context.Context, opts FindCommentsOptions, issue *Issu
ids = append(ids, comment.ReviewID)
}
}
if err := e.In("id", ids).Find(&reviews); err != nil {
return nil, err
if len(ids) > 0 {
if err := e.In("id", ids).Find(&reviews); err != nil {
return nil, err
}
}
n := 0

View File

@ -17,6 +17,7 @@ import (
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/container"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/optional"
"code.gitea.io/gitea/modules/setting"
api "code.gitea.io/gitea/modules/structs"
"code.gitea.io/gitea/modules/timeutil"
@ -96,7 +97,7 @@ type Issue struct {
// TODO: RemoveIssueRef: see "repo/issue/branch_selector_field.tmpl"
Ref string
PinOrder int `xorm:"DEFAULT 0"`
PinOrder int `xorm:"-"` // 0 means not loaded, -1 means loaded but not pinned
DeadlineUnix timeutil.TimeStamp `xorm:"INDEX"`
@ -290,6 +291,23 @@ func (issue *Issue) LoadMilestone(ctx context.Context) (err error) {
return nil
}
func (issue *Issue) LoadPinOrder(ctx context.Context) error {
if issue.PinOrder != 0 {
return nil
}
issuePin, err := GetIssuePin(ctx, issue)
if err != nil && !db.IsErrNotExist(err) {
return err
}
if issuePin != nil {
issue.PinOrder = issuePin.PinOrder
} else {
issue.PinOrder = -1
}
return nil
}
// LoadAttributes loads the attribute of this issue.
func (issue *Issue) LoadAttributes(ctx context.Context) (err error) {
if err = issue.LoadRepo(ctx); err != nil {
@ -329,6 +347,10 @@ func (issue *Issue) LoadAttributes(ctx context.Context) (err error) {
return err
}
if err = issue.LoadPinOrder(ctx); err != nil {
return err
}
if err = issue.Comments.LoadAttributes(ctx); err != nil {
return err
}
@ -341,6 +363,14 @@ func (issue *Issue) LoadAttributes(ctx context.Context) (err error) {
return issue.loadReactions(ctx)
}
// IsPinned returns whether an Issue is pinned
func (issue *Issue) IsPinned() bool {
if issue.PinOrder == 0 {
setting.PanicInDevOrTesting("issue's pinorder has not been loaded")
}
return issue.PinOrder > 0
}
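A minimal sketch (not part of this change) of how a caller is expected to use the new sentinel values — 0 means not loaded, -1 means loaded but not pinned, and a positive value is the pin position; the helper name below is hypothetical:
// Hypothetical helper inside the issues package, illustrating the sentinel semantics.
func reportPinState(ctx context.Context, issue *Issue) (bool, error) {
	if err := issue.LoadPinOrder(ctx); err != nil { // must be loaded before calling IsPinned
		return false, err
	}
	// At this point PinOrder is either -1 (not pinned) or the 1-based pin position.
	return issue.IsPinned(), nil
}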
func (issue *Issue) ResetAttributesLoaded() {
issue.isLabelsLoaded = false
issue.isMilestoneLoaded = false
@ -501,6 +531,45 @@ func GetIssueByIndex(ctx context.Context, repoID, index int64) (*Issue, error) {
return issue, nil
}
func isPullToCond(isPull optional.Option[bool]) builder.Cond {
if isPull.Has() {
return builder.Eq{"is_pull": isPull.Value()}
}
return builder.NewCond()
}
func FindLatestUpdatedIssues(ctx context.Context, repoID int64, isPull optional.Option[bool], pageSize int) (IssueList, error) {
issues := make([]*Issue, 0, pageSize)
err := db.GetEngine(ctx).Where("repo_id = ?", repoID).
And(isPullToCond(isPull)).
OrderBy("updated_unix DESC").
Limit(pageSize).
Find(&issues)
return issues, err
}
func FindIssuesSuggestionByKeyword(ctx context.Context, repoID int64, keyword string, isPull optional.Option[bool], excludedID int64, pageSize int) (IssueList, error) {
cond := builder.NewCond()
if excludedID > 0 {
cond = cond.And(builder.Neq{"`id`": excludedID})
}
// It seems that GitHub searches both title and content (maybe sorting by the search engine's ranking system?)
// The first PR (https://github.com/go-gitea/gitea/pull/32327) uses "search indexer" to search "name(title) + content"
// But it seems that searching "content" (especially LIKE by DB engine) generates worse (unusable) results.
// So now (https://github.com/go-gitea/gitea/pull/33538) it only searches "name(title)", leave the improvements to the future.
cond = cond.And(db.BuildCaseInsensitiveLike("`name`", keyword))
issues := make([]*Issue, 0, pageSize)
err := db.GetEngine(ctx).Where("repo_id = ?", repoID).
And(isPullToCond(isPull)).
And(cond).
OrderBy("updated_unix DESC, `index` DESC").
Limit(pageSize).
Find(&issues)
return issues, err
}
// GetIssueWithAttrsByIndex returns issue by index in a repository.
func GetIssueWithAttrsByIndex(ctx context.Context, repoID, index int64) (*Issue, error) {
issue, err := GetIssueByIndex(ctx, repoID, index)
@ -680,190 +749,6 @@ func (issue *Issue) HasOriginalAuthor() bool {
return issue.OriginalAuthor != "" && issue.OriginalAuthorID != 0
}
var ErrIssueMaxPinReached = util.NewInvalidArgumentErrorf("the max number of pinned issues has been reached")
// IsPinned returns whether an Issue is pinned
func (issue *Issue) IsPinned() bool {
return issue.PinOrder != 0
}
// Pin pins an Issue
func (issue *Issue) Pin(ctx context.Context, user *user_model.User) error {
// If the Issue is already pinned, we don't need to pin it twice
if issue.IsPinned() {
return nil
}
var maxPin int
_, err := db.GetEngine(ctx).SQL("SELECT MAX(pin_order) FROM issue WHERE repo_id = ? AND is_pull = ?", issue.RepoID, issue.IsPull).Get(&maxPin)
if err != nil {
return err
}
// Check if the maximum allowed Pins reached
if maxPin >= setting.Repository.Issue.MaxPinned {
return ErrIssueMaxPinReached
}
_, err = db.GetEngine(ctx).Table("issue").
Where("id = ?", issue.ID).
Update(map[string]any{
"pin_order": maxPin + 1,
})
if err != nil {
return err
}
// Add the pin event to the history
opts := &CreateCommentOptions{
Type: CommentTypePin,
Doer: user,
Repo: issue.Repo,
Issue: issue,
}
if _, err = CreateComment(ctx, opts); err != nil {
return err
}
return nil
}
// Unpin unpins an Issue
func (issue *Issue) Unpin(ctx context.Context, user *user_model.User) error {
// If the Issue is not pinned, we don't need to unpin it
if !issue.IsPinned() {
return nil
}
// This sets the Pin for all Issues that come after the unpinned Issue to the correct value
_, err := db.GetEngine(ctx).Exec("UPDATE issue SET pin_order = pin_order - 1 WHERE repo_id = ? AND is_pull = ? AND pin_order > ?", issue.RepoID, issue.IsPull, issue.PinOrder)
if err != nil {
return err
}
_, err = db.GetEngine(ctx).Table("issue").
Where("id = ?", issue.ID).
Update(map[string]any{
"pin_order": 0,
})
if err != nil {
return err
}
// Add the unpin event to the history
opts := &CreateCommentOptions{
Type: CommentTypeUnpin,
Doer: user,
Repo: issue.Repo,
Issue: issue,
}
if _, err = CreateComment(ctx, opts); err != nil {
return err
}
return nil
}
// PinOrUnpin pins or unpins an Issue
func (issue *Issue) PinOrUnpin(ctx context.Context, user *user_model.User) error {
if !issue.IsPinned() {
return issue.Pin(ctx, user)
}
return issue.Unpin(ctx, user)
}
// MovePin moves a Pinned Issue to a new Position
func (issue *Issue) MovePin(ctx context.Context, newPosition int) error {
// If the Issue is not pinned, we can't move them
if !issue.IsPinned() {
return nil
}
if newPosition < 1 {
return fmt.Errorf("The Position can't be lower than 1")
}
dbctx, committer, err := db.TxContext(ctx)
if err != nil {
return err
}
defer committer.Close()
var maxPin int
_, err = db.GetEngine(dbctx).SQL("SELECT MAX(pin_order) FROM issue WHERE repo_id = ? AND is_pull = ?", issue.RepoID, issue.IsPull).Get(&maxPin)
if err != nil {
return err
}
// If the new Position is bigger than the current Maximum, set it to the Maximum
if newPosition > maxPin+1 {
newPosition = maxPin + 1
}
// Lower the Position of all Pinned Issues that come after the current Position
_, err = db.GetEngine(dbctx).Exec("UPDATE issue SET pin_order = pin_order - 1 WHERE repo_id = ? AND is_pull = ? AND pin_order > ?", issue.RepoID, issue.IsPull, issue.PinOrder)
if err != nil {
return err
}
// Raise the Position of all Pinned Issues that come after the new Position
_, err = db.GetEngine(dbctx).Exec("UPDATE issue SET pin_order = pin_order + 1 WHERE repo_id = ? AND is_pull = ? AND pin_order >= ?", issue.RepoID, issue.IsPull, newPosition)
if err != nil {
return err
}
_, err = db.GetEngine(dbctx).Table("issue").
Where("id = ?", issue.ID).
Update(map[string]any{
"pin_order": newPosition,
})
if err != nil {
return err
}
return committer.Commit()
}
// GetPinnedIssues returns the pinned Issues for the given Repo and type
func GetPinnedIssues(ctx context.Context, repoID int64, isPull bool) (IssueList, error) {
issues := make(IssueList, 0)
err := db.GetEngine(ctx).
Table("issue").
Where("repo_id = ?", repoID).
And("is_pull = ?", isPull).
And("pin_order > 0").
OrderBy("pin_order").
Find(&issues)
if err != nil {
return nil, err
}
err = issues.LoadAttributes(ctx)
if err != nil {
return nil, err
}
return issues, nil
}
// IsNewPinAllowed returns if a new Issue or Pull request can be pinned
func IsNewPinAllowed(ctx context.Context, repoID int64, isPull bool) (bool, error) {
var maxPin int
_, err := db.GetEngine(ctx).SQL("SELECT COUNT(pin_order) FROM issue WHERE repo_id = ? AND is_pull = ? AND pin_order > 0", repoID, isPull).Get(&maxPin)
if err != nil {
return false, err
}
return maxPin < setting.Repository.Issue.MaxPinned, nil
}
// IsErrIssueMaxPinReached returns whether the error indicates that the User can't pin more Issues
func IsErrIssueMaxPinReached(err error) bool {
return err == ErrIssueMaxPinReached
}
// InsertIssues inserts issues into the database
func InsertIssues(ctx context.Context, issues ...*Issue) error {
ctx, committer, err := db.TxContext(ctx)

View File

@ -506,6 +506,39 @@ func (issues IssueList) loadTotalTrackedTimes(ctx context.Context) (err error) {
return nil
}
func (issues IssueList) LoadPinOrder(ctx context.Context) error {
if len(issues) == 0 {
return nil
}
issueIDs := container.FilterSlice(issues, func(issue *Issue) (int64, bool) {
return issue.ID, issue.PinOrder == 0
})
if len(issueIDs) == 0 {
return nil
}
issuePins, err := GetIssuePinsByIssueIDs(ctx, issueIDs)
if err != nil {
return err
}
for _, issue := range issues {
if issue.PinOrder != 0 {
continue
}
for _, pin := range issuePins {
if pin.IssueID == issue.ID {
issue.PinOrder = pin.PinOrder
break
}
}
if issue.PinOrder == 0 {
issue.PinOrder = -1
}
}
return nil
}
// LoadAttributes loads all attributes, except for attachments and comments
func (issues IssueList) LoadAttributes(ctx context.Context) error {
if _, err := issues.LoadRepositories(ctx); err != nil {

models/issues/issue_pin.go (new file, 246 lines)
View File

@ -0,0 +1,246 @@
// Copyright 2025 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package issues
import (
"context"
"errors"
"sort"
"code.gitea.io/gitea/models/db"
user_model "code.gitea.io/gitea/models/user"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/util"
)
type IssuePin struct {
ID int64 `xorm:"pk autoincr"`
RepoID int64 `xorm:"UNIQUE(s) NOT NULL"`
IssueID int64 `xorm:"UNIQUE(s) NOT NULL"`
IsPull bool `xorm:"NOT NULL"`
PinOrder int `xorm:"DEFAULT 0"`
}
var ErrIssueMaxPinReached = util.NewInvalidArgumentErrorf("the max number of pinned issues has been reached")
// IsErrIssueMaxPinReached returns whether the error indicates that the User can't pin more Issues
func IsErrIssueMaxPinReached(err error) bool {
return err == ErrIssueMaxPinReached
}
func init() {
db.RegisterModel(new(IssuePin))
}
func GetIssuePin(ctx context.Context, issue *Issue) (*IssuePin, error) {
pin := new(IssuePin)
has, err := db.GetEngine(ctx).
Where("repo_id = ?", issue.RepoID).
And("issue_id = ?", issue.ID).Get(pin)
if err != nil {
return nil, err
} else if !has {
return nil, db.ErrNotExist{
Resource: "IssuePin",
ID: issue.ID,
}
}
return pin, nil
}
func GetIssuePinsByIssueIDs(ctx context.Context, issueIDs []int64) ([]IssuePin, error) {
var pins []IssuePin
if err := db.GetEngine(ctx).In("issue_id", issueIDs).Find(&pins); err != nil {
return nil, err
}
return pins, nil
}
// PinIssue pins an Issue
func PinIssue(ctx context.Context, issue *Issue, user *user_model.User) error {
return db.WithTx(ctx, func(ctx context.Context) error {
pinnedIssuesNum, err := getPinnedIssuesNum(ctx, issue.RepoID, issue.IsPull)
if err != nil {
return err
}
// Check if the maximum allowed Pins reached
if pinnedIssuesNum >= setting.Repository.Issue.MaxPinned {
return ErrIssueMaxPinReached
}
pinnedIssuesMaxPinOrder, err := getPinnedIssuesMaxPinOrder(ctx, issue.RepoID, issue.IsPull)
if err != nil {
return err
}
if _, err = db.GetEngine(ctx).Insert(&IssuePin{
RepoID: issue.RepoID,
IssueID: issue.ID,
IsPull: issue.IsPull,
PinOrder: pinnedIssuesMaxPinOrder + 1,
}); err != nil {
return err
}
// Add the pin event to the history
_, err = CreateComment(ctx, &CreateCommentOptions{
Type: CommentTypePin,
Doer: user,
Repo: issue.Repo,
Issue: issue,
})
return err
})
}
// UnpinIssue unpins an Issue
func UnpinIssue(ctx context.Context, issue *Issue, user *user_model.User) error {
return db.WithTx(ctx, func(ctx context.Context) error {
// Delete the pin record for this Issue; if no record is deleted, the Issue was not pinned
cnt, err := db.GetEngine(ctx).Where("issue_id=?", issue.ID).Delete(new(IssuePin))
if err != nil {
return err
}
if cnt == 0 {
return nil
}
// Add the unpin event to the history
_, err = CreateComment(ctx, &CreateCommentOptions{
Type: CommentTypeUnpin,
Doer: user,
Repo: issue.Repo,
Issue: issue,
})
return err
})
}
func getPinnedIssuesNum(ctx context.Context, repoID int64, isPull bool) (int, error) {
var pinnedIssuesNum int
_, err := db.GetEngine(ctx).SQL("SELECT count(pin_order) FROM issue_pin WHERE repo_id = ? AND is_pull = ?", repoID, isPull).Get(&pinnedIssuesNum)
return pinnedIssuesNum, err
}
func getPinnedIssuesMaxPinOrder(ctx context.Context, repoID int64, isPull bool) (int, error) {
var maxPinnedIssuesMaxPinOrder int
_, err := db.GetEngine(ctx).SQL("SELECT max(pin_order) FROM issue_pin WHERE repo_id = ? AND is_pull = ?", repoID, isPull).Get(&maxPinnedIssuesMaxPinOrder)
return maxPinnedIssuesMaxPinOrder, err
}
// MovePin moves a Pinned Issue to a new Position
func MovePin(ctx context.Context, issue *Issue, newPosition int) error {
if newPosition < 1 {
return errors.New("The Position can't be lower than 1")
}
issuePin, err := GetIssuePin(ctx, issue)
if err != nil {
return err
}
if issuePin.PinOrder == newPosition {
return nil
}
return db.WithTx(ctx, func(ctx context.Context) error {
if issuePin.PinOrder > newPosition { // move the issue to a lower position
_, err = db.GetEngine(ctx).Exec("UPDATE issue_pin SET pin_order = pin_order + 1 WHERE repo_id = ? AND is_pull = ? AND pin_order >= ? AND pin_order < ?", issue.RepoID, issue.IsPull, newPosition, issuePin.PinOrder)
} else { // move the issue to a higher position
// Lower the Position of all Pinned Issues that come after the current Position
_, err = db.GetEngine(ctx).Exec("UPDATE issue_pin SET pin_order = pin_order - 1 WHERE repo_id = ? AND is_pull = ? AND pin_order > ? AND pin_order <= ?", issue.RepoID, issue.IsPull, issuePin.PinOrder, newPosition)
}
if err != nil {
return err
}
_, err = db.GetEngine(ctx).
Table("issue_pin").
Where("id = ?", issuePin.ID).
Update(map[string]any{
"pin_order": newPosition,
})
return err
})
}
func GetPinnedIssueIDs(ctx context.Context, repoID int64, isPull bool) ([]int64, error) {
var issuePins []IssuePin
if err := db.GetEngine(ctx).
Table("issue_pin").
Where("repo_id = ?", repoID).
And("is_pull = ?", isPull).
Find(&issuePins); err != nil {
return nil, err
}
sort.Slice(issuePins, func(i, j int) bool {
return issuePins[i].PinOrder < issuePins[j].PinOrder
})
var ids []int64
for _, pin := range issuePins {
ids = append(ids, pin.IssueID)
}
return ids, nil
}
func GetIssuePinsByRepoID(ctx context.Context, repoID int64, isPull bool) ([]*IssuePin, error) {
var pins []*IssuePin
if err := db.GetEngine(ctx).Where("repo_id = ? AND is_pull = ?", repoID, isPull).Find(&pins); err != nil {
return nil, err
}
return pins, nil
}
// GetPinnedIssues returns the pinned Issues for the given Repo and type
func GetPinnedIssues(ctx context.Context, repoID int64, isPull bool) (IssueList, error) {
issuePins, err := GetIssuePinsByRepoID(ctx, repoID, isPull)
if err != nil {
return nil, err
}
if len(issuePins) == 0 {
return IssueList{}, nil
}
ids := make([]int64, 0, len(issuePins))
for _, pin := range issuePins {
ids = append(ids, pin.IssueID)
}
issues := make(IssueList, 0, len(ids))
if err := db.GetEngine(ctx).In("id", ids).Find(&issues); err != nil {
return nil, err
}
for _, issue := range issues {
for _, pin := range issuePins {
if pin.IssueID == issue.ID {
issue.PinOrder = pin.PinOrder
break
}
}
if (!setting.IsProd || setting.IsInTesting) && issue.PinOrder == 0 {
panic("It should not happen that a pinned Issue has no PinOrder")
}
}
sort.Slice(issues, func(i, j int) bool {
return issues[i].PinOrder < issues[j].PinOrder
})
if err = issues.LoadAttributes(ctx); err != nil {
return nil, err
}
return issues, nil
}
// IsNewPinAllowed returns if a new Issue or Pull request can be pinned
func IsNewPinAllowed(ctx context.Context, repoID int64, isPull bool) (bool, error) {
var maxPin int
_, err := db.GetEngine(ctx).SQL("SELECT COUNT(pin_order) FROM issue_pin WHERE repo_id = ? AND is_pull = ?", repoID, isPull).Get(&maxPin)
if err != nil {
return false, err
}
return maxPin < setting.Repository.Issue.MaxPinned, nil
}
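For orientation, a hedged sketch (illustrative only, not part of this change) of the typical lifecycle against the new issue_pin table, using only the functions defined above:
// Illustrative only: pin an issue, move it to the front, then unpin it again.
func examplePinLifecycle(ctx context.Context, issue *Issue, doer *user_model.User) error {
	ok, err := IsNewPinAllowed(ctx, issue.RepoID, issue.IsPull)
	if err != nil {
		return err
	}
	if !ok {
		return ErrIssueMaxPinReached
	}
	if err := PinIssue(ctx, issue, doer); err != nil {
		return err
	}
	if err := MovePin(ctx, issue, 1); err != nil { // move to the first position
		return err
	}
	return UnpinIssue(ctx, issue, doer)
}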

View File

@ -38,13 +38,30 @@ func (issue *Issue) projectID(ctx context.Context) int64 {
}
// ProjectColumnID returns the project column ID if the issue was assigned to one
func (issue *Issue) ProjectColumnID(ctx context.Context) int64 {
func (issue *Issue) ProjectColumnID(ctx context.Context) (int64, error) {
var ip project_model.ProjectIssue
has, err := db.GetEngine(ctx).Where("issue_id=?", issue.ID).Get(&ip)
if err != nil || !has {
return 0
if err != nil {
return 0, err
} else if !has {
return 0, nil
}
return ip.ProjectColumnID
return ip.ProjectColumnID, nil
}
func LoadProjectIssueColumnMap(ctx context.Context, projectID, defaultColumnID int64) (map[int64]int64, error) {
issues := make([]project_model.ProjectIssue, 0)
if err := db.GetEngine(ctx).Where("project_id=?", projectID).Find(&issues); err != nil {
return nil, err
}
result := make(map[int64]int64, len(issues))
for _, issue := range issues {
if issue.ProjectColumnID == 0 {
issue.ProjectColumnID = defaultColumnID
}
result[issue.IssueID] = issue.ProjectColumnID
}
return result, nil
}
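A sketch of how the new column map can replace the removed per-column loading: bucket already-loaded issues by column in a single pass (the helper name is hypothetical):
// Hypothetical helper: group issues by project column using one query for the whole project.
func groupIssuesByColumn(ctx context.Context, projectID, defaultColumnID int64, issues IssueList) (map[int64]IssueList, error) {
	columnMap, err := LoadProjectIssueColumnMap(ctx, projectID, defaultColumnID)
	if err != nil {
		return nil, err
	}
	grouped := make(map[int64]IssueList)
	for _, issue := range issues {
		columnID := columnMap[issue.ID] // 0 if the issue is not attached to this project
		grouped[columnID] = append(grouped[columnID], issue)
	}
	return grouped, nil
}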
// LoadIssuesFromColumn load issues assigned to this column
@ -59,11 +76,11 @@ func LoadIssuesFromColumn(ctx context.Context, b *project_model.Column, opts *Is
}
if b.Default {
issues, err := Issues(ctx, &IssuesOptions{
ProjectColumnID: db.NoConditionID,
ProjectID: b.ProjectID,
SortType: "project-column-sorting",
})
issues, err := Issues(ctx, opts.Copy(func(o *IssuesOptions) {
o.ProjectColumnID = db.NoConditionID
o.ProjectID = b.ProjectID
o.SortType = "project-column-sorting"
}))
if err != nil {
return nil, err
}
@ -77,19 +94,6 @@ func LoadIssuesFromColumn(ctx context.Context, b *project_model.Column, opts *Is
return issueList, nil
}
// LoadIssuesFromColumnList load issues assigned to the columns
func LoadIssuesFromColumnList(ctx context.Context, bs project_model.ColumnList, opts *IssuesOptions) (map[int64]IssueList, error) {
issuesMap := make(map[int64]IssueList, len(bs))
for i := range bs {
il, err := LoadIssuesFromColumn(ctx, bs[i], opts)
if err != nil {
return nil, err
}
issuesMap[bs[i].ID] = il
}
return issuesMap, nil
}
// IssueAssignOrRemoveProject changes the project associated with an issue
// If newProjectID is 0, the issue is removed from the project
func IssueAssignOrRemoveProject(ctx context.Context, issue *Issue, doer *user_model.User, newProjectID, newColumnID int64) error {
@ -110,7 +114,7 @@ func IssueAssignOrRemoveProject(ctx context.Context, issue *Issue, doer *user_mo
return util.NewPermissionDeniedErrorf("issue %d can't be accessed by project %d", issue.ID, newProject.ID)
}
if newColumnID == 0 {
newDefaultColumn, err := newProject.GetDefaultColumn(ctx)
newDefaultColumn, err := newProject.MustDefaultColumn(ctx)
if err != nil {
return err
}

View File

@ -49,9 +49,9 @@ type IssuesOptions struct { //nolint
// prioritize issues from this repo
PriorityRepoID int64
IsArchived optional.Option[bool]
Org *organization.Organization // issues permission scope
Team *organization.Team // issues permission scope
User *user_model.User // issues permission scope
Owner *user_model.User // issues permission scope, it could be an organization or a user
Team *organization.Team // issues permission scope
Doer *user_model.User // issues permission scope
}
// Copy returns a copy of the options.
@ -273,8 +273,12 @@ func applyConditions(sess *xorm.Session, opts *IssuesOptions) {
applyLabelsCondition(sess, opts)
if opts.User != nil {
sess.And(issuePullAccessibleRepoCond("issue.repo_id", opts.User.ID, opts.Org, opts.Team, opts.IsPull.Value()))
if opts.Owner != nil {
sess.And(repo_model.UserOwnedRepoCond(opts.Owner.ID))
}
if opts.Doer != nil && !opts.Doer.IsAdmin {
sess.And(issuePullAccessibleRepoCond("issue.repo_id", opts.Doer.ID, opts.Owner, opts.Team, opts.IsPull.Value()))
}
}
@ -321,20 +325,20 @@ func teamUnitsRepoCond(id string, userID, orgID, teamID int64, units ...unit.Typ
}
// issuePullAccessibleRepoCond userID must not be zero, this condition require join repository table
func issuePullAccessibleRepoCond(repoIDstr string, userID int64, org *organization.Organization, team *organization.Team, isPull bool) builder.Cond {
func issuePullAccessibleRepoCond(repoIDstr string, userID int64, owner *user_model.User, team *organization.Team, isPull bool) builder.Cond {
cond := builder.NewCond()
unitType := unit.TypeIssues
if isPull {
unitType = unit.TypePullRequests
}
if org != nil {
if owner != nil && owner.IsOrganization() {
if team != nil {
cond = cond.And(teamUnitsRepoCond(repoIDstr, userID, org.ID, team.ID, unitType)) // special team member repos
cond = cond.And(teamUnitsRepoCond(repoIDstr, userID, owner.ID, team.ID, unitType)) // special team member repos
} else {
cond = cond.And(
builder.Or(
repo_model.UserOrgUnitRepoCond(repoIDstr, userID, org.ID, unitType), // team member repos
repo_model.UserOrgPublicUnitRepoCond(userID, org.ID), // user org public non-member repos, TODO: check repo has issues
repo_model.UserOrgUnitRepoCond(repoIDstr, userID, owner.ID, unitType), // team member repos
repo_model.UserOrgPublicUnitRepoCond(userID, owner.ID), // user org public non-member repos, TODO: check repo has issues
),
)
}

View File

@ -373,6 +373,7 @@ func prepareMigrationTasks() []*migration {
// Gitea 1.23.0-rc0 ends at migration ID number 311 (database version 312)
newMigration(312, "Add DeleteBranchAfterMerge to AutoMerge", v1_24.AddDeleteBranchAfterMergeForAutoMerge),
newMigration(313, "Move PinOrder from issue table to a new table issue_pin", v1_24.MovePinOrderToTableIssuePin),
}
return preparedMigrations
}

View File

@ -0,0 +1,31 @@
// Copyright 2025 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package v1_24 //nolint
import (
"code.gitea.io/gitea/models/migrations/base"
"xorm.io/xorm"
)
func MovePinOrderToTableIssuePin(x *xorm.Engine) error {
type IssuePin struct {
ID int64 `xorm:"pk autoincr"`
RepoID int64 `xorm:"UNIQUE(s) NOT NULL"`
IssueID int64 `xorm:"UNIQUE(s) NOT NULL"`
IsPull bool `xorm:"NOT NULL"`
PinOrder int `xorm:"DEFAULT 0"`
}
if err := x.Sync(new(IssuePin)); err != nil {
return err
}
if _, err := x.Exec("INSERT INTO issue_pin (repo_id, issue_id, is_pull, pin_order) SELECT repo_id, id, is_pull, pin_order FROM issue WHERE pin_order > 0"); err != nil {
return err
}
sess := x.NewSession()
defer sess.Close()
return base.DropTableColumns(sess, "issue", "pin_order")
}

View File

@ -228,6 +228,11 @@ func SetRepositoryLink(ctx context.Context, packageID, repoID int64) error {
return err
}
func UnlinkRepository(ctx context.Context, packageID int64) error {
_, err := db.GetEngine(ctx).ID(packageID).Cols("repo_id").Update(&Package{RepoID: 0})
return err
}
// UnlinkRepositoryFromAllPackages unlinks every package from the repository
func UnlinkRepositoryFromAllPackages(ctx context.Context, repoID int64) error {
_, err := db.GetEngine(ctx).Where("repo_id = ?", repoID).Cols("repo_id").Update(&Package{})

View File

@ -152,7 +152,7 @@ func (p *Permission) ReadableUnitTypes() []unit.Type {
}
func (p *Permission) LogString() string {
format := "<Permission AccessMode=%s, %d Units, %d UnitsMode(s): [ "
format := "<Permission AccessMode=%s, %d Units, %d UnitsMode(s): ["
args := []any{p.AccessMode.ToString(), len(p.units), len(p.unitsMode)}
for i, u := range p.units {
@ -164,14 +164,16 @@ func (p *Permission) LogString() string {
config = err.Error()
}
}
format += "\nUnits[%d]: ID: %d RepoID: %d Type: %s Config: %s"
format += "\n\tunits[%d]: ID=%d RepoID=%d Type=%s Config=%s"
args = append(args, i, u.ID, u.RepoID, u.Type.LogString(), config)
}
for key, value := range p.unitsMode {
format += "\nUnitMode[%-v]: %-v"
format += "\n\tunitsMode[%-v]: %-v"
args = append(args, key.LogString(), value.LogString())
}
format += " ]>"
format += "\n\teveryoneAccessMode: %-v"
args = append(args, p.everyoneAccessMode)
format += "\n\t]>"
return fmt.Sprintf(format, args...)
}

View File

@ -48,6 +48,8 @@ type Column struct {
ProjectID int64 `xorm:"INDEX NOT NULL"`
CreatorID int64 `xorm:"NOT NULL"`
NumIssues int64 `xorm:"-"`
CreatedUnix timeutil.TimeStamp `xorm:"INDEX created"`
UpdatedUnix timeutil.TimeStamp `xorm:"INDEX updated"`
}
@ -57,20 +59,6 @@ func (Column) TableName() string {
return "project_board" // TODO: the legacy table name should be project_column
}
// NumIssues return counter of all issues assigned to the column
func (c *Column) NumIssues(ctx context.Context) int {
total, err := db.GetEngine(ctx).Table("project_issue").
Where("project_id=?", c.ProjectID).
And("project_board_id=?", c.ID).
GroupBy("issue_id").
Cols("issue_id").
Count()
if err != nil {
return 0
}
return int(total)
}
func (c *Column) GetIssues(ctx context.Context) ([]*ProjectIssue, error) {
issues := make([]*ProjectIssue, 0, 5)
if err := db.GetEngine(ctx).Where("project_id=?", c.ProjectID).
@ -192,7 +180,7 @@ func deleteColumnByID(ctx context.Context, columnID int64) error {
if err != nil {
return err
}
defaultColumn, err := project.GetDefaultColumn(ctx)
defaultColumn, err := project.MustDefaultColumn(ctx)
if err != nil {
return err
}
@ -257,8 +245,8 @@ func (p *Project) GetColumns(ctx context.Context) (ColumnList, error) {
return columns, nil
}
// GetDefaultColumn return default column and ensure only one exists
func (p *Project) GetDefaultColumn(ctx context.Context) (*Column, error) {
// getDefaultColumn return default column and ensure only one exists
func (p *Project) getDefaultColumn(ctx context.Context) (*Column, error) {
var column Column
has, err := db.GetEngine(ctx).
Where("project_id=? AND `default` = ?", p.ID, true).
@ -270,6 +258,33 @@ func (p *Project) GetDefaultColumn(ctx context.Context) (*Column, error) {
if has {
return &column, nil
}
return nil, ErrProjectColumnNotExist{ColumnID: 0}
}
// MustDefaultColumn returns the default column for a project.
// If one exists, it is returned
// If none exists, the first column will be elevated to the default column of this project
func (p *Project) MustDefaultColumn(ctx context.Context) (*Column, error) {
c, err := p.getDefaultColumn(ctx)
if err != nil && !IsErrProjectColumnNotExist(err) {
return nil, err
}
if c != nil {
return c, nil
}
var column Column
has, err := db.GetEngine(ctx).Where("project_id=?", p.ID).OrderBy("sorting, id").Get(&column)
if err != nil {
return nil, err
}
if has {
column.Default = true
if _, err := db.GetEngine(ctx).ID(column.ID).Cols("`default`").Update(&column); err != nil {
return nil, err
}
return &column, nil
}
// create a default column if none is found
column = Column{

View File

@ -20,19 +20,19 @@ func TestGetDefaultColumn(t *testing.T) {
assert.NoError(t, err)
// check if default column was added
column, err := projectWithoutDefault.GetDefaultColumn(db.DefaultContext)
column, err := projectWithoutDefault.MustDefaultColumn(db.DefaultContext)
assert.NoError(t, err)
assert.Equal(t, int64(5), column.ProjectID)
assert.Equal(t, "Uncategorized", column.Title)
assert.Equal(t, "Done", column.Title)
projectWithMultipleDefaults, err := GetProjectByID(db.DefaultContext, 6)
assert.NoError(t, err)
// check if multiple defaults were removed
column, err = projectWithMultipleDefaults.GetDefaultColumn(db.DefaultContext)
column, err = projectWithMultipleDefaults.MustDefaultColumn(db.DefaultContext)
assert.NoError(t, err)
assert.Equal(t, int64(6), column.ProjectID)
assert.Equal(t, int64(9), column.ID)
assert.Equal(t, int64(9), column.ID) // there are 2 default columns in the test data, use the latest one
// set 8 as default column
assert.NoError(t, SetDefaultColumn(db.DefaultContext, column.ProjectID, 8))

View File

@ -8,7 +8,6 @@ import (
"fmt"
"code.gitea.io/gitea/models/db"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/util"
)
@ -34,48 +33,6 @@ func deleteProjectIssuesByProjectID(ctx context.Context, projectID int64) error
return err
}
// NumIssues return counter of all issues assigned to a project
func (p *Project) NumIssues(ctx context.Context) int {
c, err := db.GetEngine(ctx).Table("project_issue").
Where("project_id=?", p.ID).
GroupBy("issue_id").
Cols("issue_id").
Count()
if err != nil {
log.Error("NumIssues: %v", err)
return 0
}
return int(c)
}
// NumClosedIssues return counter of closed issues assigned to a project
func (p *Project) NumClosedIssues(ctx context.Context) int {
c, err := db.GetEngine(ctx).Table("project_issue").
Join("INNER", "issue", "project_issue.issue_id=issue.id").
Where("project_issue.project_id=? AND issue.is_closed=?", p.ID, true).
Cols("issue_id").
Count()
if err != nil {
log.Error("NumClosedIssues: %v", err)
return 0
}
return int(c)
}
// NumOpenIssues return counter of open issues assigned to a project
func (p *Project) NumOpenIssues(ctx context.Context) int {
c, err := db.GetEngine(ctx).Table("project_issue").
Join("INNER", "issue", "project_issue.issue_id=issue.id").
Where("project_issue.project_id=? AND issue.is_closed=?", p.ID, false).
Cols("issue_id").
Count()
if err != nil {
log.Error("NumOpenIssues: %v", err)
return 0
}
return int(c)
}
func (c *Column) moveIssuesToAnotherColumn(ctx context.Context, newColumn *Column) error {
if c.ProjectID != newColumn.ProjectID {
return fmt.Errorf("columns have to be in the same project")

View File

@ -97,6 +97,9 @@ type Project struct {
Type Type
RenderedContent template.HTML `xorm:"-"`
NumOpenIssues int64 `xorm:"-"`
NumClosedIssues int64 `xorm:"-"`
NumIssues int64 `xorm:"-"`
CreatedUnix timeutil.TimeStamp `xorm:"INDEX created"`
UpdatedUnix timeutil.TimeStamp `xorm:"INDEX updated"`
@ -244,6 +247,10 @@ func GetSearchOrderByBySortType(sortType string) db.SearchOrderBy {
return db.SearchOrderByRecentUpdated
case "leastupdate":
return db.SearchOrderByLeastUpdated
case "alphabetically":
return "title ASC"
case "reversealphabetically":
return "title DESC"
default:
return db.SearchOrderByNewest
}

View File

@ -385,11 +385,12 @@ func (u *User) ValidatePassword(passwd string) bool {
}
// IsPasswordSet checks if the password is set or left empty
// TODO: It's better to clarify the "password" behavior for different types (individual, bot)
func (u *User) IsPasswordSet() bool {
return len(u.Passwd) != 0
return u.Passwd != ""
}
// IsOrganization returns true if user is actually a organization.
// IsOrganization returns true if user is actually an organization.
func (u *User) IsOrganization() bool {
return u.Type == UserTypeOrganization
}
@ -399,13 +400,14 @@ func (u *User) IsIndividual() bool {
return u.Type == UserTypeIndividual
}
func (u *User) IsUser() bool {
return u.Type == UserTypeIndividual || u.Type == UserTypeBot
// IsTypeBot returns whether the user is of type bot
func (u *User) IsTypeBot() bool {
return u.Type == UserTypeBot
}
// IsBot returns whether or not the user is of type bot
func (u *User) IsBot() bool {
return u.Type == UserTypeBot
// IsTokenAccessAllowed returns whether the user is an individual or a bot (which allows for token access)
func (u *User) IsTokenAccessAllowed() bool {
return u.Type == UserTypeIndividual || u.Type == UserTypeBot
}
// DisplayName returns full name if it's not empty,
@ -1127,28 +1129,89 @@ func ValidateCommitWithEmail(ctx context.Context, c *git.Commit) *User {
}
// ValidateCommitsWithEmails checks whether the authors' e-mails of commits correspond to users.
func ValidateCommitsWithEmails(ctx context.Context, oldCommits []*git.Commit) []*UserCommit {
func ValidateCommitsWithEmails(ctx context.Context, oldCommits []*git.Commit) ([]*UserCommit, error) {
var (
emails = make(map[string]*User)
newCommits = make([]*UserCommit, 0, len(oldCommits))
emailSet = make(container.Set[string])
)
for _, c := range oldCommits {
var u *User
if c.Author != nil {
if v, ok := emails[c.Author.Email]; !ok {
u, _ = GetUserByEmail(ctx, c.Author.Email)
emails[c.Author.Email] = u
} else {
u = v
emailSet.Add(c.Author.Email)
}
}
emailUserMap, err := GetUsersByEmails(ctx, emailSet.Values())
if err != nil {
return nil, err
}
for _, c := range oldCommits {
user, ok := emailUserMap[c.Author.Email]
if !ok {
user = &User{
Name: c.Author.Name,
Email: c.Author.Email,
}
}
newCommits = append(newCommits, &UserCommit{
User: u,
User: user,
Commit: c,
})
}
return newCommits
return newCommits, nil
}
func GetUsersByEmails(ctx context.Context, emails []string) (map[string]*User, error) {
if len(emails) == 0 {
return nil, nil
}
needCheckEmails := make(container.Set[string])
needCheckUserNames := make(container.Set[string])
for _, email := range emails {
if strings.HasSuffix(email, fmt.Sprintf("@%s", setting.Service.NoReplyAddress)) {
username := strings.TrimSuffix(email, fmt.Sprintf("@%s", setting.Service.NoReplyAddress))
needCheckUserNames.Add(username)
} else {
needCheckEmails.Add(strings.ToLower(email))
}
}
emailAddresses := make([]*EmailAddress, 0, len(needCheckEmails))
if err := db.GetEngine(ctx).In("lower_email", needCheckEmails.Values()).
And("is_activated=?", true).
Find(&emailAddresses); err != nil {
return nil, err
}
userIDs := make(container.Set[int64])
for _, email := range emailAddresses {
userIDs.Add(email.UID)
}
users, err := GetUsersMapByIDs(ctx, userIDs.Values())
if err != nil {
return nil, err
}
results := make(map[string]*User, len(emails))
for _, email := range emailAddresses {
user := users[email.UID]
if user != nil {
if user.KeepEmailPrivate {
results[user.LowerName+"@"+setting.Service.NoReplyAddress] = user
} else {
results[email.Email] = user
}
}
}
users = make(map[int64]*User, len(needCheckUserNames))
if err := db.GetEngine(ctx).In("lower_name", needCheckUserNames.Values()).Find(&users); err != nil {
return nil, err
}
for _, user := range users {
results[user.LowerName+"@"+setting.Service.NoReplyAddress] = user
}
return results, nil
}
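A small sketch (illustrative only) of the batched lookup, including the no-reply mapping; the addresses and the NoReplyAddress value are assumptions:
// Illustrative only: one normal address and one no-reply address in a single lookup.
// Assumes setting.Service.NoReplyAddress is "noreply.example.org".
func exampleGetUsersByEmails(ctx context.Context) error {
	users, err := GetUsersByEmails(ctx, []string{
		"alice@example.com",       // resolved via the activated email_address rows
		"bob@noreply.example.org", // resolved via the user name "bob"
	})
	if err != nil {
		return err
	}
	for email, user := range users {
		log.Info("%s -> user #%d (%s)", email, user.ID, user.Name)
	}
	return nil
}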
// GetUserByEmail returns the user object by given e-mail if exists.

View File

@ -56,7 +56,7 @@ func NewActionsUser() *User {
Email: ActionsUserEmail,
KeepEmailPrivate: true,
LoginName: ActionsUserName,
Type: UserTypeIndividual,
Type: UserTypeBot,
AllowCreateOrganization: true,
Visibility: structs.VisibleTypePublic,
}

View File

@ -0,0 +1,48 @@
// Copyright 2025 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package actions
import (
"net/http"
actions_model "code.gitea.io/gitea/models/actions"
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/storage"
"code.gitea.io/gitea/services/context"
)
// Artifacts using the v4 backend are stored as a single combined zip file per artifact on the backend
// The v4 backend ensures ContentEncoding is set to "application/zip", which is not the case for the old backend
func IsArtifactV4(art *actions_model.ActionArtifact) bool {
return art.ArtifactName+".zip" == art.ArtifactPath && art.ContentEncoding == "application/zip"
}
func DownloadArtifactV4ServeDirectOnly(ctx *context.Base, art *actions_model.ActionArtifact) (bool, error) {
if setting.Actions.ArtifactStorage.ServeDirect() {
u, err := storage.ActionsArtifacts.URL(art.StoragePath, art.ArtifactPath, nil)
if u != nil && err == nil {
ctx.Redirect(u.String(), http.StatusFound)
return true, nil
}
}
return false, nil
}
func DownloadArtifactV4Fallback(ctx *context.Base, art *actions_model.ActionArtifact) error {
f, err := storage.ActionsArtifacts.Open(art.StoragePath)
if err != nil {
return err
}
defer f.Close()
http.ServeContent(ctx.Resp, ctx.Req, art.ArtifactName+".zip", art.CreatedUnix.AsLocalTime(), f)
return nil
}
func DownloadArtifactV4(ctx *context.Base, art *actions_model.ActionArtifact) error {
ok, err := DownloadArtifactV4ServeDirectOnly(ctx, art)
if ok || err != nil {
return err
}
return DownloadArtifactV4Fallback(ctx, art)
}
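As a usage sketch, a route handler could combine the helpers above; the handler name is hypothetical and the error handling is intentionally minimal:
// Hypothetical wiring for serving a v4 artifact download.
func serveArtifactV4(ctx *context.Base, art *actions_model.ActionArtifact) {
	if !IsArtifactV4(art) {
		http.Error(ctx.Resp, "not a v4 artifact", http.StatusBadRequest)
		return
	}
	if err := DownloadArtifactV4(ctx, art); err != nil { // direct redirect or streamed fallback
		http.Error(ctx.Resp, err.Error(), http.StatusInternalServerError)
	}
}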

View File

@ -46,19 +46,9 @@ func parseLsTreeLine(line []byte) (*LsTreeEntry, error) {
entry.Size = optional.Some(size)
}
switch string(entryMode) {
case "100644":
entry.EntryMode = EntryModeBlob
case "100755":
entry.EntryMode = EntryModeExec
case "120000":
entry.EntryMode = EntryModeSymlink
case "160000":
entry.EntryMode = EntryModeCommit
case "040000", "040755": // git uses 040000 for tree object, but some users may get 040755 for unknown reasons
entry.EntryMode = EntryModeTree
default:
return nil, fmt.Errorf("unknown type: %v", string(entryMode))
entry.EntryMode, err = ParseEntryMode(string(entryMode))
if err != nil || entry.EntryMode == EntryModeNoEntry {
return nil, fmt.Errorf("invalid ls-tree output (invalid mode): %q, err: %w", line, err)
}
entry.ID, err = NewIDFromString(string(entryObjectID))

View File

@ -3,7 +3,10 @@
package git
import "strconv"
import (
"fmt"
"strconv"
)
// EntryMode the type of the object in the git tree
type EntryMode int
@ -11,6 +14,9 @@ type EntryMode int
// There are only a few file modes in Git. They look like unix file modes, but they can only be
// one of these.
const (
// EntryModeNoEntry is possible if the file was added or removed in a commit. In the case of
// an added file, the base commit will not have the file in its tree, so a mode of 0o000000 is used.
EntryModeNoEntry EntryMode = 0o000000
// EntryModeBlob
EntryModeBlob EntryMode = 0o100644
// EntryModeExec
@ -33,3 +39,22 @@ func ToEntryMode(value string) EntryMode {
v, _ := strconv.ParseInt(value, 8, 32)
return EntryMode(v)
}
func ParseEntryMode(mode string) (EntryMode, error) {
switch mode {
case "000000":
return EntryModeNoEntry, nil
case "100644":
return EntryModeBlob, nil
case "100755":
return EntryModeExec, nil
case "120000":
return EntryModeSymlink, nil
case "160000":
return EntryModeCommit, nil
case "040000", "040755": // git uses 040000 for tree object, but some users may get 040755 for unknown reasons
return EntryModeTree, nil
default:
return 0, fmt.Errorf("unparsable entry mode: %s", mode)
}
}
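A quick sketch contrasting the strict parser with the lenient ToEntryMode above; illustrative only:
// Illustrative only: ParseEntryMode reports unknown modes as errors instead of
// silently producing an arbitrary value like ToEntryMode would.
func exampleParseEntryMode() error {
	mode, err := ParseEntryMode("100644")
	if err != nil {
		return err
	}
	if mode != EntryModeBlob {
		return fmt.Errorf("unexpected mode: %o", mode)
	}
	if _, err := ParseEntryMode("123456"); err == nil { // unknown modes are rejected
		return fmt.Errorf("expected an error for an unknown mode")
	}
	return nil
}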

View File

@ -260,17 +260,28 @@ func (b *Indexer) Search(ctx context.Context, opts *internal.SearchOptions) (int
var (
indexerQuery query.Query
keywordQuery query.Query
contentQuery query.Query
)
pathQuery := bleve.NewPrefixQuery(strings.ToLower(opts.Keyword))
pathQuery.FieldVal = "Filename"
pathQuery.SetBoost(10)
contentQuery := bleve.NewMatchQuery(opts.Keyword)
contentQuery.FieldVal = "Content"
if opts.IsKeywordFuzzy {
contentQuery.Fuzziness = inner_bleve.GuessFuzzinessByKeyword(opts.Keyword)
keywordAsPhrase, isPhrase := internal.ParseKeywordAsPhrase(opts.Keyword)
if isPhrase {
q := bleve.NewMatchPhraseQuery(keywordAsPhrase)
q.FieldVal = "Content"
if opts.IsKeywordFuzzy {
q.Fuzziness = inner_bleve.GuessFuzzinessByKeyword(keywordAsPhrase)
}
contentQuery = q
} else {
q := bleve.NewMatchQuery(opts.Keyword)
q.FieldVal = "Content"
if opts.IsKeywordFuzzy {
q.Fuzziness = inner_bleve.GuessFuzzinessByKeyword(opts.Keyword)
}
contentQuery = q
}
keywordQuery = bleve.NewDisjunctionQuery(contentQuery, pathQuery)

View File

@ -24,6 +24,7 @@ import (
"code.gitea.io/gitea/modules/setting"
"code.gitea.io/gitea/modules/timeutil"
"code.gitea.io/gitea/modules/typesniffer"
"code.gitea.io/gitea/modules/util"
"github.com/go-enry/go-enry/v2"
"github.com/olivere/elastic/v7"
@ -359,13 +360,19 @@ func extractAggs(searchResult *elastic.SearchResult) []*internal.SearchResultLan
// Search searches for codes and language stats by given conditions.
func (b *Indexer) Search(ctx context.Context, opts *internal.SearchOptions) (int64, []*internal.SearchResult, []*internal.SearchResultLanguages, error) {
searchType := esMultiMatchTypePhrasePrefix
if opts.IsKeywordFuzzy {
searchType = esMultiMatchTypeBestFields
var contentQuery elastic.Query
keywordAsPhrase, isPhrase := internal.ParseKeywordAsPhrase(opts.Keyword)
if isPhrase {
contentQuery = elastic.NewMatchPhraseQuery("content", keywordAsPhrase)
} else {
// TODO: this is the old logic, but not really using "fuzziness"
// * IsKeywordFuzzy=true: "best_fields"
// * IsKeywordFuzzy=false: "phrase_prefix"
contentQuery = elastic.NewMultiMatchQuery("content", opts.Keyword).
Type(util.Iif(opts.IsKeywordFuzzy, esMultiMatchTypeBestFields, esMultiMatchTypePhrasePrefix))
}
kwQuery := elastic.NewBoolQuery().Should(
elastic.NewMultiMatchQuery(opts.Keyword, "content").Type(searchType),
contentQuery,
elastic.NewMultiMatchQuery(opts.Keyword, "filename^10").Type(esMultiMatchTypePhrasePrefix),
)
query := elastic.NewBoolQuery()

View File

@ -0,0 +1,59 @@
// Copyright 2025 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package gitgrep
import (
"context"
"fmt"
"strings"
"code.gitea.io/gitea/modules/git"
code_indexer "code.gitea.io/gitea/modules/indexer/code"
"code.gitea.io/gitea/modules/setting"
)
func indexSettingToGitGrepPathspecList() (list []string) {
for _, expr := range setting.Indexer.IncludePatterns {
list = append(list, ":(glob)"+expr.PatternString())
}
for _, expr := range setting.Indexer.ExcludePatterns {
list = append(list, ":(glob,exclude)"+expr.PatternString())
}
return list
}
func PerformSearch(ctx context.Context, page int, repoID int64, gitRepo *git.Repository, ref git.RefName, keyword string, isFuzzy bool) (searchResults []*code_indexer.Result, total int, err error) {
// TODO: it should also respect ParseKeywordAsPhrase and clarify the "fuzzy" behavior
res, err := git.GrepSearch(ctx, gitRepo, keyword, git.GrepOptions{
ContextLineNumber: 1,
IsFuzzy: isFuzzy,
RefName: ref.String(),
PathspecList: indexSettingToGitGrepPathspecList(),
})
if err != nil {
// TODO: if no branch exists, it reports: exit status 128, fatal: this operation must be run in a work tree.
return nil, 0, fmt.Errorf("git.GrepSearch: %w", err)
}
commitID, err := gitRepo.GetRefCommitID(ref.String())
if err != nil {
return nil, 0, fmt.Errorf("gitRepo.GetRefCommitID: %w", err)
}
total = len(res)
pageStart := min((page-1)*setting.UI.RepoSearchPagingNum, len(res))
pageEnd := min(page*setting.UI.RepoSearchPagingNum, len(res))
res = res[pageStart:pageEnd]
for _, r := range res {
searchResults = append(searchResults, &code_indexer.Result{
RepoID: repoID,
Filename: r.Filename,
CommitID: commitID,
// UpdatedUnix: not supported yet
// Language: not supported yet
// Color: not supported yet
Lines: code_indexer.HighlightSearchResultCode(r.Filename, "", r.LineNumbers, strings.Join(r.LineCodes, "\n")),
})
}
return searchResults, total, nil
}

View File

@ -1,7 +1,7 @@
// Copyright 2024 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package repo
package gitgrep
import (
"testing"

View File

@ -29,13 +29,11 @@ var (
// When the real indexer is not ready, it will be a dummy indexer which will return an error to explain it's not ready.
// So it's always safe to use it as *globalIndexer.Load() and call its methods.
globalIndexer atomic.Pointer[internal.Indexer]
dummyIndexer *internal.Indexer
)
func init() {
i := internal.NewDummyIndexer()
dummyIndexer = &i
globalIndexer.Store(dummyIndexer)
dummyIndexer := internal.NewDummyIndexer()
globalIndexer.Store(&dummyIndexer)
}
func index(ctx context.Context, indexer internal.Indexer, repoID int64) error {

View File

@ -35,7 +35,7 @@ func FilenameOfIndexerID(indexerID string) string {
return indexerID[index+1:]
}
// Given the contents of file, returns the boundaries of its first seven lines.
// FilenameMatchIndexPos returns the boundaries of its first seven lines.
func FilenameMatchIndexPos(content string) (int, int) {
count := 1
for i, c := range content {
@ -48,3 +48,11 @@ func FilenameMatchIndexPos(content string) (int, int) {
}
return 0, len(content)
}
func ParseKeywordAsPhrase(keyword string) (string, bool) {
if strings.HasPrefix(keyword, `"`) && strings.HasSuffix(keyword, `"`) && len(keyword) > 1 {
// only remove the prefix and suffix quotes, no need to decode the content at the moment
return keyword[1 : len(keyword)-1], true
}
return "", false
}

View File

@ -0,0 +1,30 @@
// Copyright 2025 The Gitea Authors. All rights reserved.
// SPDX-License-Identifier: MIT
package internal
import (
"testing"
"github.com/stretchr/testify/assert"
)
func TestParseKeywordAsPhrase(t *testing.T) {
cases := []struct {
keyword string
phrase string
isPhrase bool
}{
{``, "", false},
{`a`, "", false},
{`"`, "", false},
{`"a`, "", false},
{`"a"`, "a", true},
{`""\"""`, `"\""`, true},
}
for _, c := range cases {
phrase, isPhrase := ParseKeywordAsPhrase(c.keyword)
assert.Equal(t, c.phrase, phrase, "keyword=%q", c.keyword)
assert.Equal(t, c.isPhrase, isPhrase, "keyword=%q", c.keyword)
}
}

View File

@ -73,9 +73,9 @@ func ToDBOptions(ctx context.Context, options *internal.SearchOptions) (*issue_m
UpdatedBeforeUnix: options.UpdatedBeforeUnix.Value(),
PriorityRepoID: 0,
IsArchived: options.IsArchived,
Org: nil,
Owner: nil,
Team: nil,
User: nil,
Doer: nil,
}
if len(options.MilestoneIDs) == 1 && options.MilestoneIDs[0] == 0 {

View File

@ -92,6 +92,11 @@ func getIssueIndexerData(ctx context.Context, issueID int64) (*internal.IndexerD
projectID = issue.Project.ID
}
projectColumnID, err := issue.ProjectColumnID(ctx)
if err != nil {
return nil, false, err
}
return &internal.IndexerData{
ID: issue.ID,
RepoID: issue.RepoID,
@ -106,7 +111,7 @@ func getIssueIndexerData(ctx context.Context, issueID int64) (*internal.IndexerD
NoLabel: len(labels) == 0,
MilestoneID: issue.MilestoneID,
ProjectID: projectID,
ProjectColumnID: issue.ProjectColumnID(ctx),
ProjectColumnID: projectColumnID,
PosterID: issue.PosterID,
AssigneeID: issue.AssigneeID,
MentionIDs: mentionIDs,

View File

@ -5,6 +5,7 @@ package log
import (
"context"
"reflect"
"runtime"
"strings"
"sync"
@ -175,6 +176,20 @@ func (l *LoggerImpl) IsEnabled() bool {
return l.level.Load() < int32(FATAL) && len(l.eventWriters) > 0
}
func asLogStringer(v any) LogStringer {
if s, ok := v.(LogStringer); ok {
return s
} else if a := reflect.ValueOf(v); a.Kind() == reflect.Struct {
// in case the receiver is a pointer, but the value is a struct
vp := reflect.New(a.Type())
vp.Elem().Set(a)
if s, ok := vp.Interface().(LogStringer); ok {
return s
}
}
return nil
}
// Log prepares the log event, if the level matches, the event will be sent to the writers
func (l *LoggerImpl) Log(skip int, level Level, format string, logArgs ...any) {
if Level(l.level.Load()) > level {
@ -207,11 +222,11 @@ func (l *LoggerImpl) Log(skip int, level Level, format string, logArgs ...any) {
// handle LogStringer values
for i, v := range msgArgs {
if cv, ok := v.(*ColoredValue); ok {
if s, ok := cv.v.(LogStringer); ok {
cv.v = logStringFormatter{v: s}
if ls := asLogStringer(cv.v); ls != nil {
cv.v = logStringFormatter{v: ls}
}
} else if s, ok := v.(LogStringer); ok {
msgArgs[i] = logStringFormatter{v: s}
} else if ls := asLogStringer(v); ls != nil {
msgArgs[i] = logStringFormatter{v: ls}
}
}

View File

@ -116,6 +116,14 @@ func (t testLogString) LogString() string {
return "log-string"
}
type testLogStringPtrReceiver struct {
Field string
}
func (t *testLogStringPtrReceiver) LogString() string {
return "log-string-ptr-receiver"
}
func TestLoggerLogString(t *testing.T) {
logger := NewLoggerWithWriters(context.Background(), "test")
@ -124,9 +132,13 @@ func TestLoggerLogString(t *testing.T) {
logger.AddWriters(w1)
logger.Info("%s %s %#v %v", testLogString{}, &testLogString{}, testLogString{Field: "detail"}, NewColoredValue(testLogString{}, FgRed))
logger.Info("%s %s %#v %v", testLogStringPtrReceiver{}, &testLogStringPtrReceiver{}, testLogStringPtrReceiver{Field: "detail"}, NewColoredValue(testLogStringPtrReceiver{}, FgRed))
logger.Close()
assert.Equal(t, []string{"log-string log-string log.testLogString{Field:\"detail\"} \x1b[31mlog-string\x1b[0m\n"}, w1.GetLogs())
assert.Equal(t, []string{
"log-string log-string log.testLogString{Field:\"detail\"} \x1b[31mlog-string\x1b[0m\n",
"log-string-ptr-receiver log-string-ptr-receiver &log.testLogStringPtrReceiver{Field:\"detail\"} \x1b[31mlog-string-ptr-receiver\x1b[0m\n",
}, w1.GetLogs())
}
func TestLoggerExpressionFilter(t *testing.T) {

View File

@ -12,18 +12,17 @@ import (
// Downloader downloads the site repo information
type Downloader interface {
SetContext(context.Context)
GetRepoInfo() (*Repository, error)
GetTopics() ([]string, error)
GetMilestones() ([]*Milestone, error)
GetReleases() ([]*Release, error)
GetLabels() ([]*Label, error)
GetIssues(page, perPage int) ([]*Issue, bool, error)
GetComments(commentable Commentable) ([]*Comment, bool, error)
GetAllComments(page, perPage int) ([]*Comment, bool, error)
GetRepoInfo(ctx context.Context) (*Repository, error)
GetTopics(ctx context.Context) ([]string, error)
GetMilestones(ctx context.Context) ([]*Milestone, error)
GetReleases(ctx context.Context) ([]*Release, error)
GetLabels(ctx context.Context) ([]*Label, error)
GetIssues(ctx context.Context, page, perPage int) ([]*Issue, bool, error)
GetComments(ctx context.Context, commentable Commentable) ([]*Comment, bool, error)
GetAllComments(ctx context.Context, page, perPage int) ([]*Comment, bool, error)
SupportGetRepoComments() bool
GetPullRequests(page, perPage int) ([]*PullRequest, bool, error)
GetReviews(reviewable Reviewable) ([]*Review, error)
GetPullRequests(ctx context.Context, page, perPage int) ([]*PullRequest, bool, error)
GetReviews(ctx context.Context, reviewable Reviewable) ([]*Review, error)
FormatCloneURL(opts MigrateOptions, remoteAddr string) (string, error)
}
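A minimal sketch (an assumption, not part of this diff) of a downloader under the new context-aware signatures: it embeds NullDownloader (shown in the next file), so every unsupported method keeps returning ErrNotSupported, and it overrides only GetRepoInfo.
type staticDownloader struct {
	NullDownloader
	repo *Repository
}
// GetRepoInfo is the only capability this hypothetical downloader implements.
func (d *staticDownloader) GetRepoInfo(_ context.Context) (*Repository, error) {
	return d.repo, nil
}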

View File

@ -13,56 +13,53 @@ type NullDownloader struct{}
var _ Downloader = &NullDownloader{}
// SetContext set context
func (n NullDownloader) SetContext(_ context.Context) {}
// GetRepoInfo returns a repository information
func (n NullDownloader) GetRepoInfo() (*Repository, error) {
func (n NullDownloader) GetRepoInfo(_ context.Context) (*Repository, error) {
return nil, ErrNotSupported{Entity: "RepoInfo"}
}
// GetTopics return repository topics
func (n NullDownloader) GetTopics() ([]string, error) {
func (n NullDownloader) GetTopics(_ context.Context) ([]string, error) {
return nil, ErrNotSupported{Entity: "Topics"}
}
// GetMilestones returns milestones
func (n NullDownloader) GetMilestones() ([]*Milestone, error) {
func (n NullDownloader) GetMilestones(_ context.Context) ([]*Milestone, error) {
return nil, ErrNotSupported{Entity: "Milestones"}
}
// GetReleases returns releases
func (n NullDownloader) GetReleases() ([]*Release, error) {
func (n NullDownloader) GetReleases(_ context.Context) ([]*Release, error) {
return nil, ErrNotSupported{Entity: "Releases"}
}
// GetLabels returns labels
func (n NullDownloader) GetLabels() ([]*Label, error) {
func (n NullDownloader) GetLabels(_ context.Context) ([]*Label, error) {
return nil, ErrNotSupported{Entity: "Labels"}
}
// GetIssues returns issues according to page and perPage
func (n NullDownloader) GetIssues(page, perPage int) ([]*Issue, bool, error) {
func (n NullDownloader) GetIssues(_ context.Context, page, perPage int) ([]*Issue, bool, error) {
return nil, false, ErrNotSupported{Entity: "Issues"}
}
// GetComments returns comments of an issue or PR
func (n NullDownloader) GetComments(commentable Commentable) ([]*Comment, bool, error) {
func (n NullDownloader) GetComments(_ context.Context, commentable Commentable) ([]*Comment, bool, error) {
return nil, false, ErrNotSupported{Entity: "Comments"}
}
// GetAllComments returns paginated comments
func (n NullDownloader) GetAllComments(page, perPage int) ([]*Comment, bool, error) {
func (n NullDownloader) GetAllComments(_ context.Context, page, perPage int) ([]*Comment, bool, error) {
return nil, false, ErrNotSupported{Entity: "AllComments"}
}
// GetPullRequests returns pull requests according to page and perPage
func (n NullDownloader) GetPullRequests(page, perPage int) ([]*PullRequest, bool, error) {
func (n NullDownloader) GetPullRequests(_ context.Context, page, perPage int) ([]*PullRequest, bool, error) {
return nil, false, ErrNotSupported{Entity: "PullRequests"}
}
// GetReviews returns pull request reviews
func (n NullDownloader) GetReviews(reviewable Reviewable) ([]*Review, error) {
func (n NullDownloader) GetReviews(_ context.Context, reviewable Reviewable) ([]*Review, error) {
return nil, ErrNotSupported{Entity: "Reviews"}
}

View File

@ -49,21 +49,15 @@ func (d *RetryDownloader) retry(work func() error) error {
return err
}
// SetContext set context
func (d *RetryDownloader) SetContext(ctx context.Context) {
d.ctx = ctx
d.Downloader.SetContext(ctx)
}
// GetRepoInfo returns a repository information with retry
func (d *RetryDownloader) GetRepoInfo() (*Repository, error) {
func (d *RetryDownloader) GetRepoInfo(ctx context.Context) (*Repository, error) {
var (
repo *Repository
err error
)
err = d.retry(func() error {
repo, err = d.Downloader.GetRepoInfo()
repo, err = d.Downloader.GetRepoInfo(ctx)
return err
})
@ -71,14 +65,14 @@ func (d *RetryDownloader) GetRepoInfo() (*Repository, error) {
}
// GetTopics returns a repository's topics with retry
func (d *RetryDownloader) GetTopics() ([]string, error) {
func (d *RetryDownloader) GetTopics(ctx context.Context) ([]string, error) {
var (
topics []string
err error
)
err = d.retry(func() error {
topics, err = d.Downloader.GetTopics()
topics, err = d.Downloader.GetTopics(ctx)
return err
})
@ -86,14 +80,14 @@ func (d *RetryDownloader) GetTopics() ([]string, error) {
}
// GetMilestones returns a repository's milestones with retry
func (d *RetryDownloader) GetMilestones() ([]*Milestone, error) {
func (d *RetryDownloader) GetMilestones(ctx context.Context) ([]*Milestone, error) {
var (
milestones []*Milestone
err error
)
err = d.retry(func() error {
milestones, err = d.Downloader.GetMilestones()
milestones, err = d.Downloader.GetMilestones(ctx)
return err
})
@ -101,14 +95,14 @@ func (d *RetryDownloader) GetMilestones() ([]*Milestone, error) {
}
// GetReleases returns a repository's releases with retry
func (d *RetryDownloader) GetReleases() ([]*Release, error) {
func (d *RetryDownloader) GetReleases(ctx context.Context) ([]*Release, error) {
var (
releases []*Release
err error
)
err = d.retry(func() error {
releases, err = d.Downloader.GetReleases()
releases, err = d.Downloader.GetReleases(ctx)
return err
})
@ -116,14 +110,14 @@ func (d *RetryDownloader) GetReleases() ([]*Release, error) {
}
// GetLabels returns a repository's labels with retry
func (d *RetryDownloader) GetLabels() ([]*Label, error) {
func (d *RetryDownloader) GetLabels(ctx context.Context) ([]*Label, error) {
var (
labels []*Label
err error
)
err = d.retry(func() error {
labels, err = d.Downloader.GetLabels()
labels, err = d.Downloader.GetLabels(ctx)
return err
})
@ -131,7 +125,7 @@ func (d *RetryDownloader) GetLabels() ([]*Label, error) {
}
// GetIssues returns a repository's issues with retry
func (d *RetryDownloader) GetIssues(page, perPage int) ([]*Issue, bool, error) {
func (d *RetryDownloader) GetIssues(ctx context.Context, page, perPage int) ([]*Issue, bool, error) {
var (
issues []*Issue
isEnd bool
@ -139,7 +133,7 @@ func (d *RetryDownloader) GetIssues(page, perPage int) ([]*Issue, bool, error) {
)
err = d.retry(func() error {
issues, isEnd, err = d.Downloader.GetIssues(page, perPage)
issues, isEnd, err = d.Downloader.GetIssues(ctx, page, perPage)
return err
})
@ -147,7 +141,7 @@ func (d *RetryDownloader) GetIssues(page, perPage int) ([]*Issue, bool, error) {
}
// GetComments returns a repository's comments with retry
func (d *RetryDownloader) GetComments(commentable Commentable) ([]*Comment, bool, error) {
func (d *RetryDownloader) GetComments(ctx context.Context, commentable Commentable) ([]*Comment, bool, error) {
var (
comments []*Comment
isEnd bool
@ -155,7 +149,7 @@ func (d *RetryDownloader) GetComments(commentable Commentable) ([]*Comment, bool
)
err = d.retry(func() error {
comments, isEnd, err = d.Downloader.GetComments(commentable)
comments, isEnd, err = d.Downloader.GetComments(ctx, commentable)
return err
})
@ -163,7 +157,7 @@ func (d *RetryDownloader) GetComments(commentable Commentable) ([]*Comment, bool
}
// GetPullRequests returns a repository's pull requests with retry
func (d *RetryDownloader) GetPullRequests(page, perPage int) ([]*PullRequest, bool, error) {
func (d *RetryDownloader) GetPullRequests(ctx context.Context, page, perPage int) ([]*PullRequest, bool, error) {
var (
prs []*PullRequest
err error
@ -171,7 +165,7 @@ func (d *RetryDownloader) GetPullRequests(page, perPage int) ([]*PullRequest, bo
)
err = d.retry(func() error {
prs, isEnd, err = d.Downloader.GetPullRequests(page, perPage)
prs, isEnd, err = d.Downloader.GetPullRequests(ctx, page, perPage)
return err
})
@ -179,14 +173,13 @@ func (d *RetryDownloader) GetPullRequests(page, perPage int) ([]*PullRequest, bo
}
// GetReviews returns pull requests reviews
func (d *RetryDownloader) GetReviews(reviewable Reviewable) ([]*Review, error) {
func (d *RetryDownloader) GetReviews(ctx context.Context, reviewable Reviewable) ([]*Review, error) {
var (
reviews []*Review
err error
)
err = d.retry(func() error {
reviews, err = d.Downloader.GetReviews(reviewable)
reviews, err = d.Downloader.GetReviews(ctx, reviewable)
return err
})

View File

@ -4,20 +4,22 @@
package migration
import "context"
// Uploader uploads all the information of one repository
type Uploader interface {
MaxBatchInsertSize(tp string) int
CreateRepo(repo *Repository, opts MigrateOptions) error
CreateTopics(topic ...string) error
CreateMilestones(milestones ...*Milestone) error
CreateReleases(releases ...*Release) error
SyncTags() error
CreateLabels(labels ...*Label) error
CreateIssues(issues ...*Issue) error
CreateComments(comments ...*Comment) error
CreatePullRequests(prs ...*PullRequest) error
CreateReviews(reviews ...*Review) error
CreateRepo(ctx context.Context, repo *Repository, opts MigrateOptions) error
CreateTopics(ctx context.Context, topic ...string) error
CreateMilestones(ctx context.Context, milestones ...*Milestone) error
CreateReleases(ctx context.Context, releases ...*Release) error
SyncTags(ctx context.Context) error
CreateLabels(ctx context.Context, labels ...*Label) error
CreateIssues(ctx context.Context, issues ...*Issue) error
CreateComments(ctx context.Context, comments ...*Comment) error
CreatePullRequests(ctx context.Context, prs ...*PullRequest) error
CreateReviews(ctx context.Context, reviews ...*Review) error
Rollback() error
Finish() error
Finish(ctx context.Context) error
Close()
}

View File

@ -32,3 +32,67 @@ type ActionTaskResponse struct {
Entries []*ActionTask `json:"workflow_runs"`
TotalCount int64 `json:"total_count"`
}
// CreateActionWorkflowDispatch represents the payload for triggering a workflow dispatch event
// swagger:model
type CreateActionWorkflowDispatch struct {
// required: true
// example: refs/heads/main
Ref string `json:"ref" binding:"Required"`
// required: false
Inputs map[string]string `json:"inputs,omitempty"`
}
// ActionWorkflow represents an ActionWorkflow
type ActionWorkflow struct {
ID string `json:"id"`
Name string `json:"name"`
Path string `json:"path"`
State string `json:"state"`
// swagger:strfmt date-time
CreatedAt time.Time `json:"created_at"`
// swagger:strfmt date-time
UpdatedAt time.Time `json:"updated_at"`
URL string `json:"url"`
HTMLURL string `json:"html_url"`
BadgeURL string `json:"badge_url"`
// swagger:strfmt date-time
DeletedAt time.Time `json:"deleted_at,omitempty"`
}
// ActionWorkflowResponse returns a list of ActionWorkflow
type ActionWorkflowResponse struct {
Workflows []*ActionWorkflow `json:"workflows"`
TotalCount int64 `json:"total_count"`
}
// ActionArtifact represents an ActionArtifact
type ActionArtifact struct {
ID int64 `json:"id"`
Name string `json:"name"`
SizeInBytes int64 `json:"size_in_bytes"`
URL string `json:"url"`
ArchiveDownloadURL string `json:"archive_download_url"`
Expired bool `json:"expired"`
WorkflowRun *ActionWorkflowRun `json:"workflow_run"`
// swagger:strfmt date-time
CreatedAt time.Time `json:"created_at"`
// swagger:strfmt date-time
UpdatedAt time.Time `json:"updated_at"`
// swagger:strfmt date-time
ExpiresAt time.Time `json:"expires_at"`
}
// ActionWorkflowRun represents a WorkflowRun
type ActionWorkflowRun struct {
ID int64 `json:"id"`
RepositoryID int64 `json:"repository_id"`
HeadSha string `json:"head_sha"`
}
// ActionArtifactsResponse returns ActionArtifacts
type ActionArtifactsResponse struct {
Entries []*ActionArtifact `json:"artifacts"`
TotalCount int64 `json:"total_count"`
}
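A minimal sketch (an assumption; the ref and input values are illustrative, and the target API route is not shown in this diff) of building the workflow-dispatch payload defined above:
payload := CreateActionWorkflowDispatch{
	Ref:    "refs/heads/main",
	Inputs: map[string]string{"environment": "staging"},
}
body, _ := json.Marshal(payload)
// body: {"ref":"refs/heads/main","inputs":{"environment":"staging"}}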

View File

@ -10,13 +10,16 @@ import (
// Common Errors forming the base of our error system
//
// Many Errors returned by Gitea can be tested against these errors
// using errors.Is.
// Many Errors returned by Gitea can be tested against these errors using "errors.Is".
var (
ErrInvalidArgument = errors.New("invalid argument")
ErrPermissionDenied = errors.New("permission denied")
ErrAlreadyExist = errors.New("resource already exists")
ErrNotExist = errors.New("resource does not exist")
ErrInvalidArgument = errors.New("invalid argument") // also implies HTTP 400
ErrPermissionDenied = errors.New("permission denied") // also implies HTTP 403
ErrNotExist = errors.New("resource does not exist") // also implies HTTP 404
ErrAlreadyExist = errors.New("resource already exists") // also implies HTTP 409
// ErrUnprocessableContent implies HTTP 422, syntax of the request content was correct,
// but server was unable to process the contained instructions
ErrUnprocessableContent = errors.New("unprocessable content")
)
// SilentWrap provides a simple wrapper for a wrapped error where the wrapped error message plays no part in the error message
@ -36,6 +39,22 @@ func (w SilentWrap) Unwrap() error {
return w.Err
}
type LocaleWrap struct {
err error
TrKey string
TrArgs []any
}
// Error returns the message
func (w LocaleWrap) Error() string {
return w.err.Error()
}
// Unwrap returns the underlying error
func (w LocaleWrap) Unwrap() error {
return w.err
}
// NewSilentWrapErrorf returns an error that formats as the given text but unwraps as the provided error
func NewSilentWrapErrorf(unwrap error, message string, args ...any) error {
if len(args) == 0 {
@ -63,3 +82,16 @@ func NewAlreadyExistErrorf(message string, args ...any) error {
func NewNotExistErrorf(message string, args ...any) error {
return NewSilentWrapErrorf(ErrNotExist, message, args...)
}
// ErrWrapLocale wraps an error with a translation key and arguments
func ErrWrapLocale(err error, trKey string, trArgs ...any) error {
return LocaleWrap{err: err, TrKey: trKey, TrArgs: trArgs}
}
func ErrAsLocale(err error) *LocaleWrap {
var e LocaleWrap
if errors.As(err, &e) {
return &e
}
return nil
}
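A minimal sketch (an assumption; the translation key is illustrative) of how a lower layer can attach a locale key to an error and how a handler can recover it, while errors.Is against the base sentinel keeps working:
err := util.ErrWrapLocale(util.ErrInvalidArgument, "repo.form.reserved_name", "api")
if le := util.ErrAsLocale(err); le != nil {
	// render le.TrKey with le.TrArgs through the request's locale
}
// errors.Is(err, util.ErrInvalidArgument) still reports true, so the HTTP status mapping is unchanged.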

View File

@ -71,3 +71,10 @@ func KeysOfMap[K comparable, V any](m map[K]V) []K {
}
return keys
}
func SliceNilAsEmpty[T any](a []T) []T {
if a == nil {
return []T{}
}
return a
}
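One likely use, sketched here as an assumption: keeping API responses stable under encoding/json, since a nil slice marshals to null while an empty slice marshals to []:
var tags []string
b1, _ := json.Marshal(tags)                       // null
b2, _ := json.Marshal(util.SliceNilAsEmpty(tags)) // []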

View File

@ -385,6 +385,7 @@ show_only_public=Zobrazeny pouze veřejné
issues.in_your_repos=Ve vašich repozitářích
[explore]
repos=Repozitáře
users=Uživatelé

View File

@ -384,6 +384,7 @@ show_only_public=Nur öffentliche anzeigen
issues.in_your_repos=Eigene Repositories
[explore]
repos=Repositories
users=Benutzer

View File

@ -335,6 +335,7 @@ show_only_public=Εμφανίζονται μόνο δημόσια
issues.in_your_repos=Στα αποθετήρια σας
[explore]
repos=Αποθετήρια
users=Χρήστες

View File

@ -385,6 +385,13 @@ show_only_public = Showing only public
issues.in_your_repos = In your repositories
guide_title = No Activity
guide_desc = You are currently not following any repositories or users, so there is no content to display. You can explore repositories or users of interest from the links below.
explore_repos = Explore repositories
explore_users = Explore users
empty_org = There are no organizations yet.
empty_repo = There are no repositories yet.
[explore]
repos = Repositories
users = Users
@ -1695,7 +1702,9 @@ issues.time_estimate_invalid = Time estimate format is invalid
issues.start_tracking_history = started working %s
issues.tracker_auto_close = Timer will be stopped automatically when this issue gets closed
issues.tracking_already_started = `You have already started time tracking on <a href="%s">another issue</a>!`
issues.stop_tracking = Stop Timer
issues.stop_tracking_history = worked for <b>%[1]s</b> %[2]s
issues.cancel_tracking = Discard
issues.cancel_tracking_history = `canceled time tracking %s`
issues.del_time = Delete this time log
issues.add_time_history = added spent time <b>%[1]s</b> %[2]s

View File

@ -333,6 +333,7 @@ show_only_public=Mostrar sólo repositorios públicos
issues.in_your_repos=En tus repositorios
[explore]
repos=Repositorios
users=Usuarios

View File

@ -256,6 +256,7 @@ show_only_public=نمایش دادن موارد عمومی
issues.in_your_repos=در مخازن شما
[explore]
repos=مخازن
users=کاربران

View File

@ -266,6 +266,7 @@ show_only_public=Näytetään vain julkiset
issues.in_your_repos=Repoissasi
[explore]
repos=Repot
users=Käyttäjät

View File

@ -385,6 +385,13 @@ show_only_public=Afficher uniquement les dépôts publics
issues.in_your_repos=Dans vos dépôts
guide_title=Aucune activité
guide_desc=Vous n’êtes actuellement abonné à aucun dépôt ou utilisateur et il n’y a donc aucun contenu à afficher. Les liens ci-dessous vous permettront d’explorer des dépôts ou des utilisateurs susceptibles de vous intéresser.
explore_repos=Explorer des dépôts
explore_users=Explorer des utilisateurs
empty_org=Il n’y a pas encore d’organisations.
empty_repo=Il n’y a pas encore de dépôts.
[explore]
repos=Dépôts
users=Utilisateurs
@ -2330,6 +2337,8 @@ settings.event_fork=Bifurcation
settings.event_fork_desc=Dépôt bifurqué.
settings.event_wiki=Wiki
settings.event_wiki_desc=Page wiki créée, renommée, modifiée ou supprimée.
settings.event_statuses=Statuts
settings.event_statuses_desc=Statut de validation mis à jour depuis l’API.
settings.event_release=Publication
settings.event_release_desc=Publication publiée, mise à jour ou supprimée.
settings.event_push=Soumission
@ -2877,6 +2886,14 @@ view_as_role=Voir en tant que %s
view_as_public_hint=Vous visualisez le README en tant qu’utilisateur public.
view_as_member_hint=Vous visualisez le README en tant que membre de cette organisation.
worktime=Temps de travail
worktime.date_range_start=Date de début
worktime.date_range_end=Date de fin
worktime.query=Demande
worktime.time=Durée
worktime.by_repositories=Par dépôts
worktime.by_milestones=Par jalons
worktime.by_members=Par membres
[admin]
maintenance=Maintenance

View File

@ -385,6 +385,13 @@ show_only_public=Ag taispeáint poiblí amháin
issues.in_your_repos=I do stórais
guide_title=Gan Ghníomhaíocht
guide_desc=Níl aon stórtha nó úsáideoirí á leanúint agat faoi láthair, mar sin níl aon ábhar le taispeáint. Is féidir leat stórtha nó úsáideoirí spéise a iniúchadh ó na naisc thíos.
explore_repos=Déan stórtha a iniúchadh
explore_users=Déan iniúchadh ar úsáideoirí
empty_org=Níl aon eagraíochtaí ann fós.
empty_repo=Níl aon stórtha ann fós.
[explore]
repos=Stórais
users=Úsáideoirí
@ -2330,6 +2337,8 @@ settings.event_fork=Forc
settings.event_fork_desc=Forcadh stóras.
settings.event_wiki=Vicí
settings.event_wiki_desc=Leathanach Vicí cruthaithe, athainmnithe, curtha in eagar nó scriosta.
settings.event_statuses=Stádais
settings.event_statuses_desc=Nuashonraíodh Stádas Commit ón API.
settings.event_release=Scaoileadh
settings.event_release_desc=Scaoileadh foilsithe, nuashonraithe nó scriosta i stóras.
settings.event_push=Brúigh

View File

@ -225,6 +225,7 @@ show_only_public=Csak publikus mutatása
issues.in_your_repos=A tárolóidban
[explore]
repos=Tárolók
users=Felhasználók

View File

@ -243,6 +243,7 @@ show_private=Pribadi
issues.in_your_repos=Dalam repositori anda
[explore]
repos=Repositori
users=Pengguna

View File

@ -240,6 +240,7 @@ show_only_public=Að sýna aðeins opinber
issues.in_your_repos=Í hugbúnaðarsöfnum þínum
[explore]
repos=Hugbúnaðarsöfn
users=Notendur

View File

@ -277,6 +277,7 @@ show_only_public=Mostrando solo pubblici
issues.in_your_repos=Nei tuoi repository
[explore]
repos=Repository
users=Utenti

View File

@ -385,6 +385,7 @@ show_only_public=公開のみ表示
issues.in_your_repos=あなたのリポジトリ
[explore]
repos=リポジトリ
users=ユーザー

View File

@ -212,6 +212,7 @@ show_private=비공개
issues.in_your_repos=당신의 저장소에
[explore]
repos=저장소
users=유저

View File

@ -338,6 +338,7 @@ show_only_public=Attēlot tikai publiskos
issues.in_your_repos=Jūsu repozitorijos
[explore]
repos=Repozitoriji
users=Lietotāji

View File

@ -276,6 +276,7 @@ show_only_public=Toon alleen opbenbaar
issues.in_your_repos=In uw repositories
[explore]
repos=Repositories
users=Gebruikers

View File

@ -272,6 +272,7 @@ show_only_public=Wyświetlanie tylko publicznych
issues.in_your_repos=W Twoich repozytoriach
[explore]
repos=Repozytoria
users=Użytkownicy

View File

@ -335,6 +335,7 @@ show_only_public=Mostrando somente públicos
issues.in_your_repos=Em seus repositórios
[explore]
repos=Repositórios
users=Usuários

View File

@ -385,6 +385,13 @@ show_only_public=Apresentando somente os públicos
issues.in_your_repos=Nos seus repositórios
guide_title=Sem trabalho
guide_desc=Neste momento não está a seguir repositórios nem utilizadores, por isso não há conteúdo a apresentar. Pode explorar repositórios ou utilizadores de interesse a partir das ligações abaixo.
explore_repos=Explorar repositórios
explore_users=Explorar utilizadores
empty_org=Ainda não há organizações.
empty_repo=Ainda não há repositórios.
[explore]
repos=Repositórios
users=Utilizadores
@ -2330,6 +2337,8 @@ settings.event_fork=Derivar
settings.event_fork_desc=Feita a derivação do repositório.
settings.event_wiki=Wiki
settings.event_wiki_desc=Página do wiki criada, renomeada, editada ou eliminada.
settings.event_statuses=Estados
settings.event_statuses_desc=Estado do cometimento modificado através da API.
settings.event_release=Lançamento
settings.event_release_desc=Lançamento publicado, modificado ou eliminado num repositório.
settings.event_push=Enviar

View File

@ -333,6 +333,7 @@ show_only_public=Показаны только публичные
issues.in_your_repos=В ваших репозиториях
[explore]
repos=Репозитории
users=Пользователи

View File

@ -246,6 +246,7 @@ show_only_public=ප්‍රසිද්ධ පමණක් පෙන්වය
issues.in_your_repos=ඔබගේ කෝෂ්ඨවල
[explore]
repos=කෝෂ්ඨ
users=පරිශීලකයින්

View File

@ -328,6 +328,7 @@ show_only_public=Zobrazuje sa iba verejné
issues.in_your_repos=Vo vašich repozitároch
[explore]
repos=Repozitáre
users=Používatelia

View File

@ -233,6 +233,7 @@ show_only_public=Visar endast publika
issues.in_your_repos=I dina utvecklingskataloger
[explore]
repos=Utvecklingskataloger
users=Användare

View File

@ -380,6 +380,7 @@ show_only_public=Yalnızca açık olanlar gösteriliyor
issues.in_your_repos=Depolarınızda
[explore]
repos=Depolar
users=Kullanıcılar

View File

@ -260,6 +260,7 @@ show_only_public=Показано тільки публічні
issues.in_your_repos=В ваших репозиторіях
[explore]
repos=Репозиторії
users=Користувачі

View File

@ -384,6 +384,7 @@ show_only_public=只显示公开的
issues.in_your_repos=在您的仓库中
[explore]
repos=仓库
users=用户

View File

@ -118,6 +118,7 @@ show_private=私有庫
issues.in_your_repos=屬於該用戶儲存庫的
[explore]
repos=儲存庫
users=使用者

View File

@ -383,6 +383,7 @@ show_only_public=只顯示公開
issues.in_your_repos=在您的儲存庫中
[explore]
repos=儲存庫
users=使用者

package-lock.json generated (4278 lines changed)

File diff suppressed because it is too large

View File

@ -4,29 +4,29 @@
"node": ">= 18.0.0"
},
"dependencies": {
"@citation-js/core": "0.7.14",
"@citation-js/plugin-bibtex": "0.7.17",
"@citation-js/plugin-csl": "0.7.14",
"@citation-js/core": "0.7.18",
"@citation-js/plugin-bibtex": "0.7.18",
"@citation-js/plugin-csl": "0.7.18",
"@citation-js/plugin-software-formats": "0.6.1",
"@github/markdown-toolbar-element": "2.2.3",
"@github/relative-time-element": "4.4.5",
"@github/text-expander-element": "2.9.1",
"@mcaptcha/vanilla-glue": "0.1.0-alpha-3",
"@primer/octicons": "19.14.0",
"@primer/octicons": "19.15.0",
"@silverwind/vue3-calendar-heatmap": "2.0.6",
"add-asset-webpack-plugin": "3.0.0",
"ansi_up": "6.0.2",
"asciinema-player": "3.8.2",
"asciinema-player": "3.9.0",
"chart.js": "4.4.7",
"chartjs-adapter-dayjs-4": "1.0.4",
"chartjs-plugin-zoom": "2.2.0",
"clippie": "4.1.4",
"clippie": "4.1.5",
"cropperjs": "1.6.2",
"css-loader": "7.1.2",
"dayjs": "1.11.13",
"dropzone": "6.0.0-beta.2",
"easymde": "2.18.0",
"esbuild-loader": "4.2.2",
"esbuild-loader": "4.3.0",
"escape-goat": "4.0.0",
"fast-glob": "3.3.3",
"htmx.org": "2.0.4",
@ -39,13 +39,13 @@
"minimatch": "10.0.1",
"monaco-editor": "0.52.2",
"monaco-editor-webpack-plugin": "7.1.0",
"pdfobject": "2.3.0",
"pdfobject": "2.3.1",
"perfect-debounce": "1.0.0",
"postcss": "8.5.1",
"postcss": "8.5.2",
"postcss-loader": "8.1.1",
"postcss-nesting": "13.0.1",
"sortablejs": "1.15.6",
"swagger-ui-dist": "5.18.2",
"swagger-ui-dist": "5.18.3",
"tailwindcss": "3.4.17",
"throttle-debounce": "5.0.2",
"tinycolor2": "1.6.0",
@ -59,7 +59,7 @@
"vue-bar-graph": "2.2.0",
"vue-chartjs": "5.3.2",
"vue-loader": "17.4.2",
"webpack": "5.97.1",
"webpack": "5.98.0",
"webpack-cli": "6.0.1",
"wrap-ansi": "9.0.0"
},
@ -67,8 +67,8 @@
"@eslint-community/eslint-plugin-eslint-comments": "4.4.1",
"@playwright/test": "1.49.1",
"@stoplight/spectral-cli": "6.14.2",
"@stylistic/eslint-plugin-js": "2.13.0",
"@stylistic/stylelint-plugin": "3.1.1",
"@stylistic/eslint-plugin-js": "3.1.0",
"@stylistic/stylelint-plugin": "3.1.2",
"@types/dropzone": "5.7.9",
"@types/jquery": "3.5.32",
"@types/katex": "0.16.7",
@ -79,41 +79,40 @@
"@types/throttle-debounce": "5.0.2",
"@types/tinycolor2": "1.4.6",
"@types/toastify-js": "1.12.3",
"@typescript-eslint/eslint-plugin": "8.21.0",
"@typescript-eslint/parser": "8.21.0",
"@typescript-eslint/eslint-plugin": "8.24.0",
"@typescript-eslint/parser": "8.24.0",
"@vitejs/plugin-vue": "5.2.1",
"@vitest/eslint-plugin": "1.1.31",
"eslint": "8.57.0",
"eslint-import-resolver-typescript": "3.7.0",
"eslint-import-resolver-typescript": "3.8.0",
"eslint-plugin-array-func": "4.0.0",
"eslint-plugin-github": "5.0.2",
"eslint-plugin-import-x": "4.6.1",
"eslint-plugin-no-jquery": "3.1.0",
"eslint-plugin-no-use-extend-native": "0.5.0",
"eslint-plugin-playwright": "2.1.0",
"eslint-plugin-playwright": "2.2.0",
"eslint-plugin-regexp": "2.7.0",
"eslint-plugin-sonarjs": "3.0.1",
"eslint-plugin-sonarjs": "3.0.2",
"eslint-plugin-unicorn": "56.0.1",
"eslint-plugin-vitest": "0.4.1",
"eslint-plugin-vitest-globals": "1.5.0",
"eslint-plugin-vue": "9.32.0",
"eslint-plugin-vue-scoped-css": "2.9.0",
"eslint-plugin-wc": "2.2.0",
"happy-dom": "16.7.2",
"markdownlint-cli": "0.43.0",
"happy-dom": "17.1.0",
"markdownlint-cli": "0.44.0",
"nolyfill": "1.0.43",
"postcss-html": "1.8.0",
"stylelint": "16.14.1",
"stylelint-config-recommended": "15.0.0",
"stylelint-declaration-block-no-ignored-properties": "2.8.0",
"stylelint-declaration-strict-value": "1.10.7",
"stylelint-define-config": "16.14.0",
"stylelint-define-config": "16.14.1",
"stylelint-value-no-unknown-custom-properties": "6.0.1",
"svgo": "3.3.2",
"type-fest": "4.33.0",
"updates": "16.4.1",
"vite-string-plugin": "1.4.3",
"vitest": "3.0.3",
"vue-tsc": "2.2.0"
"type-fest": "4.34.1",
"updates": "16.4.2",
"vite-string-plugin": "1.4.4",
"vitest": "3.0.5",
"vue-tsc": "2.2.2"
},
"browserslist": [
"defaults"

poetry.lock generated (10 lines changed)
View File

@ -29,13 +29,14 @@ files = [
[[package]]
name = "cssbeautifier"
version = "1.15.1"
version = "1.15.3"
description = "CSS unobfuscator and beautifier."
optional = false
python-versions = "*"
groups = ["dev"]
files = [
{file = "cssbeautifier-1.15.1.tar.gz", hash = "sha256:9f7064362aedd559c55eeecf6b6bed65e05f33488dcbe39044f0403c26e1c006"},
{file = "cssbeautifier-1.15.3-py3-none-any.whl", hash = "sha256:0dcaf5ce197743a79b3a160b84ea58fcbd9e3e767c96df1171e428125b16d410"},
{file = "cssbeautifier-1.15.3.tar.gz", hash = "sha256:406b04d09e7d62c0be084fbfa2cba5126fe37359ea0d8d9f7b963a6354fc8303"},
]
[package.dependencies]
@ -102,13 +103,14 @@ files = [
[[package]]
name = "jsbeautifier"
version = "1.15.1"
version = "1.15.3"
description = "JavaScript unobfuscator and beautifier."
optional = false
python-versions = "*"
groups = ["dev"]
files = [
{file = "jsbeautifier-1.15.1.tar.gz", hash = "sha256:ebd733b560704c602d744eafc839db60a1ee9326e30a2a80c4adb8718adc1b24"},
{file = "jsbeautifier-1.15.3-py3-none-any.whl", hash = "sha256:b207a15ab7529eee4a35ae7790e9ec4e32a2b5026d51e2d0386c3a65e6ecfc91"},
{file = "jsbeautifier-1.15.3.tar.gz", hash = "sha256:5f1baf3d4ca6a615bb5417ee861b34b77609eeb12875555f8bbfabd9bf2f3457"},
]
[package.dependencies]

View File

@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16" class="svg octicon-square-circle" width="16" height="16" aria-hidden="true"><path d="M8 16A8 8 0 1 1 8 0a8 8 0 0 1 0 16m0-1.5a6.5 6.5 0 1 0 0-13 6.5 6.5 0 0 0 0 13"/><path d="M5 5.75A.75.75 0 0 1 5.75 5h4.5a.75.75 0 0 1 .75.75v4.5a.75.75 0 0 1-.75.75h-4.5a.75.75 0 0 1-.75-.75Z"/></svg>

After: new SVG icon, 346 B

Some files were not shown because too many files have changed in this diff.