Compare commits


3 Commits

SHA1        Message                                           Date
9f2f9a6e7b  Merge nightly into main                           2026-02-23 17:56:15 +01:00
1ee15cd104  Add update channel selection and TMDB setup docs  2026-02-20 13:42:24 +01:00
b56757f42a  Merge nightly into main                           2026-02-19 20:15:09 +01:00
15 changed files with 498 additions and 1227 deletions


@@ -1,11 +0,0 @@
# Changelog (Dev)
## 0.1.62-dev - 2026-02-24
- New dev build for genre performance (Serienstream).
- Genre lists load strictly the requested page only (on demand, max. 20 titles).
- Further pages are loaded only via `Naechste Seite`.
- List parser reduced to title, series URL, and cover.
- The plot is taken over from the cards and shown in the list when available.
- Metadata is loaded and displayed in full for the currently opened page.
- Series info (incl. plot/art) is already visible in the title selection, not only in the season view.
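The strict on-demand paging described above can be sketched as follows. All names here (`load_genre_page`, `fake_fetch`) are illustrative stand-ins, not the addon's real helpers; the point is that exactly one fetch happens, for the requested page only, capped at 20 titles:

```python
from typing import Callable

PAGE_SIZE = 20  # matches "max. 20 titles" from the changelog


def load_genre_page(fetch: Callable[[int], list[str]], page: int) -> list[str]:
    """Strict on-demand paging: call `fetch` exactly once, for the requested page."""
    titles = fetch(max(1, page))
    return titles[:PAGE_SIZE]  # never hand more than one page to the list view


# Usage: only page 2 is requested; earlier pages are never fetched.
calls: list[int] = []


def fake_fetch(page: int) -> list[str]:
    calls.append(page)
    return [f"Series {page}-{i}" for i in range(30)]


page2 = load_genre_page(fake_fetch, 2)
```

Clicking `Naechste Seite` would simply issue `load_genre_page(fake_fetch, 3)`; no state about earlier pages is required.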


@@ -1,38 +0,0 @@
# Changelog (Nightly)
## 0.1.62-nightly - 2026-02-24
- Serienstream genres switched to strict on-demand paging:
  - Opening a genre loads only page 1 (max. 20 titles).
  - Further pages are loaded only via `Naechste Seite`.
- List parser for Serienstream optimized to title, series URL, cover, and plot.
- Series info (plot/art) is already visible in the title selection.
- Introduced a dev changelog file (`CHANGELOG-DEV.md`) for `-dev` builds.
## 0.1.61-nightly - 2026-02-23
- Update dialog: fixed choices `Installieren` / `Abbrechen` (no more swapped Yes/No dialog).
- Versions in the update dialog are filtered by channel:
  - Main: only `x.y.z`
  - Nightly: only `x.y.z-nightly`
- The installed version is read directly from `addon.xml`.
- Switching channels immediately installs the latest version from the selected channel.
## 0.1.59-nightly - 2026-02-23
- Contains all changes from `0.1.58`.
- Update channel defaults to `Nightly`.
- Nightly repo URL set as the default.
- Settings menu reordered:
  - Sources
  - Metadata
  - TMDB Advanced
  - Updates
  - Debug Global
  - Debug Sources
- Page size in lists set to 20.
- Removed `topstream_genre_max_pages`.
## Note
- Nightly is for testing and may change at short notice.
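The per-channel version filter described above (Main accepts only `x.y.z`, Nightly only `x.y.z-nightly`) can be sketched with two regular expressions. This is an assumption about the implementation, not the addon's actual code; the version list is illustrative:

```python
import re

# Main accepts plain semver-style versions, Nightly only the `-nightly` suffix.
MAIN_RE = re.compile(r"^\d+\.\d+\.\d+$")
NIGHTLY_RE = re.compile(r"^\d+\.\d+\.\d+-nightly$")


def filter_versions(versions: list[str], channel: str) -> list[str]:
    """Keep only versions that belong to the selected update channel."""
    pattern = NIGHTLY_RE if channel == "nightly" else MAIN_RE
    return [v for v in versions if pattern.fullmatch(v)]


available = ["0.1.58", "0.1.59-nightly", "0.1.61-nightly", "0.1.62-dev"]
```

Note that `-dev` versions match neither pattern, so they never appear in the update dialog of either channel.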


@@ -1,12 +0,0 @@
# Changelog (Stable)
## 0.1.58 - 2026-02-23
- Unified menu labels (`Haeufig gesehen`, `Neuste Titel`).
- Merged the `Neue Titel` and `Neueste Folgen` menu entries into `Neuste Titel`.
- Hoster header adjustment moved centrally into `resolve_stream_link`.
- Show a notice on a Cloudflare block in ResolveURL instead of failing silently.
- Extended the update settings (channel, manual check, optional auto check).
- Brought metadata parsing in AniWorld and Filmpalast up to date (more robust cover/plot).
- Topstreamfilm search: fixed a missing `urlencode` import.
- Removed some unused functions.


@@ -29,6 +29,20 @@ It searches providers and starts streams.
 - Plugins: `addon/plugins/*_plugin.py`
 - Settings: `addon/resources/settings.xml`
+## Setting up the TMDB API key
+- Create a TMDB account and generate an API key (v3): `https://www.themoviedb.org/settings/api`
+- Open the ViewIt addon in Kodi: `Einstellungen -> TMDB`
+- Turn on `TMDB aktivieren`
+- Enter the `TMDB API Key`
+- Optionally set `TMDB Sprache` (e.g. `de-DE`)
+- Optionally enable/disable the display options:
+  - `TMDB Beschreibung anzeigen`
+  - `TMDB Poster und Vorschaubild anzeigen`
+  - `TMDB Fanart/Backdrop anzeigen`
+  - `TMDB Bewertung anzeigen`
+  - `TMDB Stimmen anzeigen`
+  - `TMDB Besetzung anzeigen`
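The settings above configure requests against TMDB's public v3 API. A minimal sketch of the URL such a lookup would use, assuming the standard `/3/search/movie` endpoint; the key and query values are placeholders and `tmdb_search_url` is not a function from this repo:

```python
from urllib.parse import urlencode


def tmdb_search_url(api_key: str, query: str, language: str = "de-DE") -> str:
    """Build a TMDB v3 movie-search URL (key passed as a query parameter)."""
    params = {"api_key": api_key, "query": query, "language": language}
    return "https://api.themoviedb.org/3/search/movie?" + urlencode(params)
```

The `language` parameter corresponds to the `TMDB Sprache` setting; with `de-DE`, TMDB returns German plot summaries where available.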
 ## Tests
 - Install dev packages: `./.venv/bin/pip install -r requirements-dev.txt`
 - Run tests: `./.venv/bin/pytest`


@@ -1,5 +1,5 @@
 <?xml version='1.0' encoding='utf-8'?>
-<addon id="plugin.video.viewit" name="ViewIt" version="0.1.62-nightly" provider-name="ViewIt">
+<addon id="plugin.video.viewit" name="ViewIt" version="0.1.57" provider-name="ViewIt">
 <requires>
 <import addon="xbmc.python" version="3.0.0" />
 <import addon="script.module.requests" />

File diff suppressed because it is too large.


@@ -15,9 +15,7 @@ from __future__ import annotations
 from datetime import datetime
 import hashlib
 import os
-import re
 from typing import Optional
-from urllib.parse import parse_qsl, urlencode

 try:  # pragma: no cover - Kodi runtime
     import xbmcaddon  # type: ignore[import-not-found]
@@ -239,40 +237,3 @@ def dump_response_html(
     max_files = get_setting_int(addon_id, max_files_setting_id, default=200)
     _prune_dump_files(log_dir, prefix=filename_prefix, max_files=max_files)
     _append_text_file(path, content)
-
-
-def normalize_resolved_stream_url(final_url: str, *, source_url: str = "") -> str:
-    """Normalize hoster-specific headers in the final stream link.
-
-    `final_url` may carry a Kodi header suffix: `url|Key=Value&...`.
-    The function only adjusts known problem cases and leaves everything else unchanged.
-    """
-    url = (final_url or "").strip()
-    if not url:
-        return ""
-    normalized = _normalize_supervideo_serversicuro(url, source_url=source_url)
-    return normalized
-
-
-def _normalize_supervideo_serversicuro(final_url: str, *, source_url: str = "") -> str:
-    if "serversicuro.cc/hls/" not in final_url.casefold() or "|" not in final_url:
-        return final_url
-    source = (source_url or "").strip()
-    code_match = re.search(
-        r"supervideo\.(?:tv|cc)/(?:e/)?([a-z0-9]+)(?:\.html)?",
-        source,
-        flags=re.IGNORECASE,
-    )
-    if not code_match:
-        return final_url
-    code = (code_match.group(1) or "").strip()
-    if not code:
-        return final_url
-    media_url, header_suffix = final_url.split("|", 1)
-    headers = dict(parse_qsl(header_suffix, keep_blank_values=True))
-    headers["Referer"] = f"https://supervideo.cc/e/{code}"
-    return f"{media_url}|{urlencode(headers)}"
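The helper removed above operates on Kodi's pipe-separated header convention, where a playback URL may carry `|Key=Value&...` as a URL-encoded header suffix. A standalone sketch of that rewrite; `add_referer` is an illustrative name mirroring the `_normalize_supervideo_serversicuro` logic, not a function from this repo:

```python
import re
from urllib.parse import parse_qsl, urlencode


def add_referer(final_url: str, source_url: str) -> str:
    """Rewrite the Referer in a Kodi `url|Key=Value&...` stream link."""
    if "|" not in final_url:
        return final_url
    # The embed code is taken from the supervideo source page URL.
    match = re.search(r"supervideo\.(?:tv|cc)/(?:e/)?([a-z0-9]+)", source_url, re.I)
    if not match:
        return final_url
    media_url, suffix = final_url.split("|", 1)
    headers = dict(parse_qsl(suffix, keep_blank_values=True))
    headers["Referer"] = f"https://supervideo.cc/e/{match.group(1)}"
    return f"{media_url}|{urlencode(headers)}"
```

Because the suffix is re-encoded with `urlencode`, special characters in the Referer (`:`, `/`) come out percent-encoded, which is the form Kodi expects in the header suffix.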


@@ -833,47 +833,12 @@ class AniworldPlugin(BasisPlugin):
         merged_poster = (poster or old_poster or "").strip()
         self._title_meta[title] = (merged_plot, merged_poster)
-    @staticmethod
-    def _is_series_image_url(url: str) -> bool:
-        value = (url or "").strip().casefold()
-        if not value:
-            return False
-        blocked = (
-            "/public/img/facebook",
-            "/public/img/logo",
-            "aniworld-logo",
-            "favicon",
-            "/public/img/german.svg",
-            "/public/img/japanese-",
-        )
-        return not any(marker in value for marker in blocked)
-    @staticmethod
-    def _extract_style_url(style_value: str) -> str:
-        style_value = (style_value or "").strip()
-        if not style_value:
-            return ""
-        match = re.search(r"url\((['\"]?)(.*?)\1\)", style_value, flags=re.IGNORECASE)
-        if not match:
-            return ""
-        return (match.group(2) or "").strip()
-    def _extract_series_metadata(self, soup: BeautifulSoupT) -> tuple[str, str, str]:
+    def _extract_series_metadata(self, soup: BeautifulSoupT) -> tuple[str, str]:
         if not soup:
-            return "", "", ""
+            return "", ""
         plot = ""
         poster = ""
-        fanart = ""
-        root = soup.select_one("#series") or soup
-        description_node = root.select_one("p.seri_des")
-        if description_node is not None:
-            full_text = (description_node.get("data-full-description") or "").strip()
-            short_text = (description_node.get_text(" ", strip=True) or "").strip()
-            plot = full_text or short_text
-        if not plot:
         for selector in ("meta[property='og:description']", "meta[name='description']"):
             node = soup.select_one(selector)
             if node is None:
@@ -892,61 +857,25 @@ class AniworldPlugin(BasisPlugin):
                 plot = text
                 break
-        cover = root.select_one("div.seriesCoverBox img[itemprop='image'], div.seriesCoverBox img")
-        if cover is not None:
-            for attr in ("data-src", "src"):
-                value = (cover.get(attr) or "").strip()
-                if value:
-                    candidate = _absolute_url(value)
-                    if self._is_series_image_url(candidate):
-                        poster = candidate
-                        break
-        if not poster:
         for selector in ("meta[property='og:image']", "meta[name='twitter:image']"):
             node = soup.select_one(selector)
             if node is None:
                 continue
             content = (node.get("content") or "").strip()
             if content:
-                candidate = _absolute_url(content)
-                if self._is_series_image_url(candidate):
-                    poster = candidate
+                poster = _absolute_url(content)
                 break
         if not poster:
-            for selector in ("img.seriesCoverBox", ".seriesCoverBox img"):
+            for selector in ("img.seriesCoverBox", ".seriesCoverBox img", "img[alt][src]"):
                 image = soup.select_one(selector)
                 if image is None:
                     continue
                 value = (image.get("data-src") or image.get("src") or "").strip()
                 if value:
-                    candidate = _absolute_url(value)
-                    if self._is_series_image_url(candidate):
-                        poster = candidate
+                    poster = _absolute_url(value)
                     break
-        backdrop_node = root.select_one("section.title .backdrop, .SeriesSection .backdrop, .backdrop")
-        if backdrop_node is not None:
-            raw_style = (backdrop_node.get("style") or "").strip()
-            style_url = self._extract_style_url(raw_style)
-            if style_url:
-                candidate = _absolute_url(style_url)
-                if self._is_series_image_url(candidate):
-                    fanart = candidate
-        if not fanart:
-            for selector in ("meta[property='og:image']",):
-                node = soup.select_one(selector)
-                if node is None:
-                    continue
-                content = (node.get("content") or "").strip()
-                if content:
-                    candidate = _absolute_url(content)
-                    if self._is_series_image_url(candidate):
-                        fanart = candidate
-                    break
-        return plot, poster, fanart
+        return plot, poster
     @staticmethod
     def _season_links_cache_name(series_url: str) -> str:
@@ -1102,17 +1031,14 @@ class AniworldPlugin(BasisPlugin):
         try:
             soup = _get_soup(series.url, session=get_requests_session("aniworld", headers=HEADERS))
-            plot, poster, fanart = self._extract_series_metadata(soup)
+            plot, poster = self._extract_series_metadata(soup)
         except Exception:
-            plot, poster, fanart = "", "", ""
+            plot, poster = "", ""
         if plot:
             info["plot"] = plot
         if poster:
             art = {"thumb": poster, "poster": poster}
-            if fanart:
-                art["fanart"] = fanart
-                art["landscape"] = fanart
         self._store_title_meta(title, plot=info.get("plot", ""), poster=poster)
         return info, art, None


@@ -603,6 +603,15 @@ class EinschaltenPlugin(BasisPlugin):
         url = urljoin(base + "/", path.lstrip("/"))
         return f"{url}?{urlencode({'query': query})}"
+    def _api_movies_url(self, *, with_genres: int, page: int = 1) -> str:
+        base = self._get_base_url()
+        if not base:
+            return ""
+        params: Dict[str, str] = {"withGenres": str(int(with_genres))}
+        if page and int(page) > 1:
+            params["page"] = str(int(page))
+        return urljoin(base + "/", "api/movies") + f"?{urlencode(params)}"
     def _genre_page_url(self, *, genre_id: int, page: int = 1) -> str:
         """Genre title pages are rendered server-side and embed the movie list in ng-state.
@@ -762,6 +771,23 @@ class EinschaltenPlugin(BasisPlugin):
         except Exception:
             return []
+    def _fetch_new_titles_movies(self) -> List[MovieItem]:
+        # "Neue Filme" lives at `/movies/new` and embeds the list in ng-state (`u: "/api/movies"`).
+        url = self._new_titles_url()
+        if not url:
+            return []
+        try:
+            _, body = self._http_get_text(url, timeout=20)
+            payload = _extract_ng_state_payload(body)
+            movies = _parse_ng_state_movies(payload)
+            _log_debug_line(f"parse_ng_state_movies:count={len(movies)}")
+            if movies:
+                _log_titles(movies, context="new_titles")
+                return movies
+            return []
+        except Exception:
+            return []
     def _fetch_new_titles_movies_page(self, page: int) -> List[MovieItem]:
         page = max(1, int(page or 1))
         url = self._new_titles_url()
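The `_api_movies_url` helper in this diff builds its query string with `urlencode` and omits `page` for page 1. A standalone sketch of that construction; the base URL is a placeholder and `api_movies_url` is a free function rewrite of the method, for illustration only:

```python
from urllib.parse import urljoin, urlencode


def api_movies_url(base: str, with_genres: int, page: int = 1) -> str:
    """Build an `/api/movies` URL; `page` is only appended beyond page 1."""
    if not base:
        return ""
    params = {"withGenres": str(int(with_genres))}
    if page and int(page) > 1:
        params["page"] = str(int(page))
    return urljoin(base + "/", "api/movies") + f"?{urlencode(params)}"
```

Appending `/` to `base` before `urljoin` keeps any existing path segment of the base URL; joining against a bare host would otherwise drop it.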


@@ -735,49 +735,44 @@ class FilmpalastPlugin(BasisPlugin):
     def _extract_detail_metadata(self, soup: BeautifulSoupT) -> tuple[str, str]:
         if not soup:
             return "", ""
-        root = soup.select_one("div#content[role='main']") or soup
-        detail = root.select_one("article.detail") or root
         plot = ""
         poster = ""
-        # Filmpalast detail page: prefer the dedicated plot block.
-        plot_node = detail.select_one(
-            "li[itemtype='http://schema.org/Movie'] span[itemprop='description']"
-        )
-        if plot_node is not None:
-            plot = (plot_node.get_text(" ", strip=True) or "").strip()
-        if not plot:
-            hidden_plot = detail.select_one("cite span.hidden")
-            if hidden_plot is not None:
-                plot = (hidden_plot.get_text(" ", strip=True) or "").strip()
-        if not plot:
-            for selector in ("meta[property='og:description']", "meta[name='description']"):
-                node = root.select_one(selector)
-                if node is None:
-                    continue
-                content = (node.get("content") or "").strip()
-                if content:
-                    plot = content
-                    break
+        for selector in ("meta[property='og:description']", "meta[name='description']"):
+            node = soup.select_one(selector)
+            if node is None:
+                continue
+            content = (node.get("content") or "").strip()
+            if content:
+                plot = content
+                break
+        if not plot:
+            for selector in (".toggle-content .coverDetails", ".entry-content p", "article p"):
+                node = soup.select_one(selector)
+                if node is None:
+                    continue
+                text = (node.get_text(" ", strip=True) or "").strip()
+                if text and len(text) > 40:
+                    plot = text
+                    break
-        # Filmpalast detail page: the cover reliably lives in `img.cover2`.
-        cover = detail.select_one("img.cover2")
-        if cover is not None:
-            value = (cover.get("data-src") or cover.get("src") or "").strip()
-            if value:
-                candidate = _absolute_url(value)
-                lower = candidate.casefold()
-                if "/themes/" not in lower and "spacer.gif" not in lower and "/files/movies/" in lower:
-                    poster = candidate
+        for selector in ("meta[property='og:image']", "meta[name='twitter:image']"):
+            node = soup.select_one(selector)
+            if node is None:
+                continue
+            content = (node.get("content") or "").strip()
+            if content:
+                poster = _absolute_url(content)
+                break
         if not poster:
-            thumb_node = detail.select_one("li[itemtype='http://schema.org/Movie'] img[itemprop='image']")
-            if thumb_node is not None:
-                value = (thumb_node.get("data-src") or thumb_node.get("src") or "").strip()
-                if value:
-                    candidate = _absolute_url(value)
-                    lower = candidate.casefold()
-                    if "/themes/" not in lower and "spacer.gif" not in lower and "/files/movies/" in lower:
-                        poster = candidate
+            for selector in ("img.cover", "article img", ".entry-content img"):
+                image = soup.select_one(selector)
+                if image is None:
+                    continue
+                value = (image.get("data-src") or image.get("src") or "").strip()
+                if value:
+                    poster = _absolute_url(value)
+                    break
         return plot, poster


@@ -79,7 +79,6 @@ SESSION_CACHE_PREFIX = "viewit.serienstream"
 SESSION_CACHE_MAX_TITLE_URLS = 800
 CATALOG_SEARCH_TTL_SECONDS = 600
 CATALOG_SEARCH_CACHE_KEY = "catalog_index"
-GENRE_LIST_PAGE_SIZE = 20
 _CATALOG_INDEX_MEMORY: tuple[float, List["SeriesResult"]] = (0.0, [])
 ProgressCallback = Optional[Callable[[str, Optional[int]], Any]]
@@ -98,7 +97,6 @@ class SeriesResult:
     title: str
     description: str
     url: str
-    cover: str = ""

 @dataclass
@@ -671,9 +669,8 @@ def _load_catalog_index_from_cache() -> Optional[List[SeriesResult]]:
         title = str(entry[0] or "").strip()
         url = str(entry[1] or "").strip()
         description = str(entry[2] or "") if len(entry) > 2 else ""
-        cover = str(entry[3] or "").strip() if len(entry) > 3 else ""
         if title and url:
-            items.append(SeriesResult(title=title, description=description, url=url, cover=cover))
+            items.append(SeriesResult(title=title, description=description, url=url))
     if items:
         _CATALOG_INDEX_MEMORY = (time.time() + CATALOG_SEARCH_TTL_SECONDS, list(items))
     return items or None
@@ -688,7 +685,7 @@ def _store_catalog_index_in_cache(items: List[SeriesResult]) -> None:
     for entry in items:
         if not entry.title or not entry.url:
             continue
-        payload.append([entry.title, entry.url, entry.description, entry.cover])
+        payload.append([entry.title, entry.url, entry.description])
     _session_cache_set(CATALOG_SEARCH_CACHE_KEY, payload, ttl_seconds=CATALOG_SEARCH_TTL_SECONDS)
@@ -1099,7 +1096,7 @@ class SerienstreamPlugin(BasisPlugin):
     name = "Serienstream"
     version = "1.0.0"
-    POPULAR_GENRE_LABEL = "Haeufig gesehen"
+    POPULAR_GENRE_LABEL = "⭐ Beliebte Serien"
     def __init__(self) -> None:
         self._series_results: Dict[str, SeriesResult] = {}
@@ -1110,8 +1107,8 @@ class SerienstreamPlugin(BasisPlugin):
         self._episode_label_cache: Dict[Tuple[str, str], Dict[str, EpisodeInfo]] = {}
         self._catalog_cache: Optional[Dict[str, List[SeriesResult]]] = None
         self._genre_group_cache: Dict[str, Dict[str, List[str]]] = {}
-        self._genre_page_entries_cache: Dict[Tuple[str, int], List[SeriesResult]] = {}
-        self._genre_page_has_more_cache: Dict[Tuple[str, int], bool] = {}
+        self._genre_page_titles_cache: Dict[Tuple[str, int], List[str]] = {}
+        self._genre_page_count_cache: Dict[str, int] = {}
         self._popular_cache: Optional[List[SeriesResult]] = None
         self._requests_available = REQUESTS_AVAILABLE
         self._default_preferred_hosters: List[str] = list(DEFAULT_PREFERRED_HOSTERS)
@@ -1120,7 +1117,6 @@ class SerienstreamPlugin(BasisPlugin):
         self._latest_cache: Dict[int, List[LatestEpisode]] = {}
         self._latest_hoster_cache: Dict[str, List[str]] = {}
         self._series_metadata_cache: Dict[str, Tuple[Dict[str, str], Dict[str, str]]] = {}
-        self._series_metadata_full: set[str] = set()
         self.is_available = True
         self.unavailable_reason: Optional[str] = None
         if not self._requests_available:  # pragma: no cover - optional dependency
@@ -1413,165 +1409,49 @@ class SerienstreamPlugin(BasisPlugin):
         value = re.sub(r"[^a-z0-9]+", "-", value).strip("-")
         return value
-    def _cache_list_metadata(self, title: str, description: str = "", cover: str = "") -> None:
-        key = self._metadata_cache_key(title)
-        cached = self._series_metadata_cache.get(key)
-        info = dict(cached[0]) if cached else {}
-        art = dict(cached[1]) if cached else {}
-        info.setdefault("title", title)
-        description = (description or "").strip()
-        if description and not info.get("plot"):
-            info["plot"] = description
-        cover = _absolute_url((cover or "").strip()) if cover else ""
-        if cover:
-            art.setdefault("thumb", cover)
-            art.setdefault("poster", cover)
-        self._series_metadata_cache[key] = (info, art)
-    @staticmethod
-    def _card_description(anchor: BeautifulSoupT) -> str:
-        if not anchor:
-            return ""
-        candidates: List[str] = []
-        direct = (anchor.get("data-search") or "").strip()
-        if direct:
-            candidates.append(direct)
-        title_attr = (anchor.get("data-title") or "").strip()
-        if title_attr:
-            candidates.append(title_attr)
-        for selector in ("p", ".description", ".desc", ".text-muted", ".small", ".overview"):
-            node = anchor.select_one(selector)
-            if node is None:
-                continue
-            text = (node.get_text(" ", strip=True) or "").strip()
-            if text:
-                candidates.append(text)
-        parent = anchor.parent if anchor else None
-        if parent is not None:
-            parent_data = (parent.get("data-search") or "").strip()
-            if parent_data:
-                candidates.append(parent_data)
-            parent_text = ""
-            try:
-                parent_text = (parent.get_text(" ", strip=True) or "").strip()
-            except Exception:
-                parent_text = ""
-            if parent_text and len(parent_text) > 24:
-                candidates.append(parent_text)
-        for value in candidates:
-            cleaned = re.sub(r"\s+", " ", str(value or "")).strip()
-            if cleaned and len(cleaned) > 12:
-                return cleaned
-        return ""
-    def _parse_genre_entries_from_soup(self, soup: BeautifulSoupT) -> List[SeriesResult]:
-        entries: List[SeriesResult] = []
-        seen_urls: set[str] = set()
-        def _add_entry(title: str, description: str, href: str, cover: str) -> None:
-            series_url = _absolute_url(href).split("#", 1)[0].split("?", 1)[0].rstrip("/")
-            if not series_url or "/serie/" not in series_url:
-                return
-            if "/staffel-" in series_url or "/episode-" in series_url:
-                return
-            if series_url in seen_urls:
-                return
-            title = (title or "").strip()
-            if not title:
-                return
-            description = (description or "").strip()
-            cover_url = _absolute_url((cover or "").strip()) if cover else ""
-            seen_urls.add(series_url)
-            self._remember_series_result(title, series_url, description)
-            self._cache_list_metadata(title, description=description, cover=cover_url)
-            entries.append(SeriesResult(title=title, description=description, url=series_url, cover=cover_url))
-        for anchor in soup.select("a.show-card[href]"):
-            href = (anchor.get("href") or "").strip()
-            if not href:
-                continue
-            img = anchor.select_one("img")
-            title = (
-                (img.get("alt") if img else "")
-                or (anchor.get("title") or "")
-                or (anchor.get_text(" ", strip=True) or "")
-            ).strip()
-            description = self._card_description(anchor)
-            cover = (img.get("data-src") if img else "") or (img.get("src") if img else "")
-            _add_entry(title, description, href, cover)
-        if entries:
-            return entries
-        for item in soup.select("li.series-item"):
-            anchor = item.find("a", href=True)
-            if not anchor:
-                continue
-            href = (anchor.get("href") or "").strip()
-            title = (anchor.get_text(" ", strip=True) or "").strip()
-            description = (item.get("data-search") or "").strip()
-            img = anchor.find("img")
-            cover = (img.get("data-src") if img else "") or (img.get("src") if img else "")
-            _add_entry(title, description, href, cover)
-        return entries
-    def _fetch_genre_page_entries(self, genre: str, page: int) -> Tuple[List[SeriesResult], bool]:
+    def _fetch_genre_page_titles(self, genre: str, page: int) -> Tuple[List[str], int]:
         slug = self._genre_slug(genre)
         if not slug:
-            return [], False
+            return [], 1
         cache_key = (slug, page)
-        cached_entries = self._genre_page_entries_cache.get(cache_key)
-        cached_has_more = self._genre_page_has_more_cache.get(cache_key)
-        if cached_entries is not None and cached_has_more is not None:
-            return list(cached_entries), bool(cached_has_more)
+        cached = self._genre_page_titles_cache.get(cache_key)
+        cached_pages = self._genre_page_count_cache.get(slug)
+        if cached is not None and cached_pages is not None:
+            return list(cached), int(cached_pages)
         url = f"{_get_base_url()}/genre/{slug}"
         if page > 1:
             url = f"{url}?page={int(page)}"
         soup = _get_soup_simple(url)
-        entries = self._parse_genre_entries_from_soup(soup)
-        has_more = False
-        for anchor in soup.select("a[rel='next'][href], a[href*='?page=']"):
+        titles: List[str] = []
+        seen: set[str] = set()
+        for anchor in soup.select("a.show-card[href]"):
             href = (anchor.get("href") or "").strip()
-            if not href:
+            series_url = _absolute_url(href).split("#", 1)[0].split("?", 1)[0].rstrip("/")
+            if "/serie/" not in series_url:
                 continue
+            img = anchor.select_one("img[alt]")
+            title = ((img.get("alt") if img else "") or "").strip()
+            if not title:
+                continue
+            key = title.casefold()
+            if key in seen:
+                continue
+            seen.add(key)
+            self._remember_series_result(title, series_url)
+            titles.append(title)
+        max_page = 1
+        for anchor in soup.select("a[href*='?page=']"):
+            href = (anchor.get("href") or "").strip()
             match = re.search(r"[?&]page=(\d+)", href)
             if not match:
-                if "next" in href.casefold():
-                    has_more = True
                 continue
             try:
-                if int(match.group(1)) > int(page):
-                    has_more = True
-                    break
+                max_page = max(max_page, int(match.group(1)))
             except Exception:
                 continue
-        if len(entries) > GENRE_LIST_PAGE_SIZE:
-            has_more = True
-            entries = entries[:GENRE_LIST_PAGE_SIZE]
-        self._genre_page_entries_cache[cache_key] = list(entries)
-        self._genre_page_has_more_cache[cache_key] = bool(has_more)
-        return list(entries), bool(has_more)
-    def titles_for_genre_page(self, genre: str, page: int) -> List[str]:
-        genre = (genre or "").strip()
-        page = max(1, int(page or 1))
-        entries, _ = self._fetch_genre_page_entries(genre, page)
-        return [entry.title for entry in entries if entry.title]
-    def genre_has_more(self, genre: str, page: int) -> bool:
-        genre = (genre or "").strip()
-        page = max(1, int(page or 1))
-        slug = self._genre_slug(genre)
-        if not slug:
-            return False
-        cache_key = (slug, page)
-        cached = self._genre_page_has_more_cache.get(cache_key)
-        if cached is not None:
-            return bool(cached)
-        _, has_more = self._fetch_genre_page_entries(genre, page)
-        return bool(has_more)
+        self._genre_page_titles_cache[cache_key] = list(titles)
+        self._genre_page_count_cache[slug] = max_page
+        return list(titles), max_page
     def titles_for_genre_group_page(self, genre: str, group_code: str, page: int = 1, page_size: int = 10) -> List[str]:
         genre = (genre or "").strip()
@@ -1581,17 +1461,14 @@ class SerienstreamPlugin(BasisPlugin):
needed = page * page_size + 1 needed = page * page_size + 1
matched: List[str] = [] matched: List[str] = []
try: try:
page_index = 1 _, max_pages = self._fetch_genre_page_titles(genre, 1)
has_more = True for page_index in range(1, max_pages + 1):
while has_more: page_titles, _ = self._fetch_genre_page_titles(genre, page_index)
page_entries, has_more = self._fetch_genre_page_entries(genre, page_index) for title in page_titles:
for entry in page_entries:
title = entry.title
if self._group_matches(group_code, title): if self._group_matches(group_code, title):
matched.append(title) matched.append(title)
if len(matched) >= needed: if len(matched) >= needed:
break break
page_index += 1
start = (page - 1) * page_size start = (page - 1) * page_size
end = start + page_size end = start + page_size
return list(matched[start:end]) return list(matched[start:end])
@@ -1610,17 +1487,14 @@ class SerienstreamPlugin(BasisPlugin):
needed = page * page_size + 1 needed = page * page_size + 1
count = 0 count = 0
try: try:
page_index = 1 _, max_pages = self._fetch_genre_page_titles(genre, 1)
has_more = True for page_index in range(1, max_pages + 1):
while has_more: page_titles, _ = self._fetch_genre_page_titles(genre, page_index)
page_entries, has_more = self._fetch_genre_page_entries(genre, page_index) for title in page_titles:
for entry in page_entries:
title = entry.title
if self._group_matches(group_code, title): if self._group_matches(group_code, title):
count += 1 count += 1
if count >= needed: if count >= needed:
return True return True
page_index += 1
return False return False
except Exception: except Exception:
grouped = self._ensure_genre_group_cache(genre) grouped = self._ensure_genre_group_cache(genre)
@@ -1737,7 +1611,6 @@ class SerienstreamPlugin(BasisPlugin):
cache_key = self._metadata_cache_key(title) cache_key = self._metadata_cache_key(title)
if info_labels or art: if info_labels or art:
self._series_metadata_cache[cache_key] = (info_labels, art) self._series_metadata_cache[cache_key] = (info_labels, art)
self._series_metadata_full.add(cache_key)
base_series_url = _series_root_url(_extract_canonical_url(series_soup, series.url)) base_series_url = _series_root_url(_extract_canonical_url(series_soup, series.url))
season_links = _extract_season_links(series_soup) season_links = _extract_season_links(series_soup)
@@ -1773,7 +1646,7 @@ class SerienstreamPlugin(BasisPlugin):
cache_key = self._metadata_cache_key(title) cache_key = self._metadata_cache_key(title)
cached = self._series_metadata_cache.get(cache_key) cached = self._series_metadata_cache.get(cache_key)
if cached is not None and cache_key in self._series_metadata_full: if cached is not None:
info, art = cached info, art = cached
return dict(info), dict(art), None return dict(info), dict(art), None
@@ -1783,14 +1656,11 @@ class SerienstreamPlugin(BasisPlugin):
self._series_metadata_cache[cache_key] = (dict(info), {}) self._series_metadata_cache[cache_key] = (dict(info), {})
return info, {}, None return info, {}, None
info: Dict[str, str] = dict(cached[0]) if cached else {"title": title} info: Dict[str, str] = {"title": title}
art: Dict[str, str] = dict(cached[1]) if cached else {} art: Dict[str, str] = {}
info.setdefault("title", title)
if series.description: if series.description:
info.setdefault("plot", series.description) info["plot"] = series.description
# Fuer Listenansichten laden wir pro Seite die Detail-Metadaten vollstaendig nach.
loaded_full = False
try: try:
soup = _get_soup(series.url, session=get_requests_session("serienstream", headers=HEADERS)) soup = _get_soup(series.url, session=get_requests_session("serienstream", headers=HEADERS))
parsed_info, parsed_art = _extract_series_metadata(soup) parsed_info, parsed_art = _extract_series_metadata(soup)
@@ -1798,13 +1668,10 @@ class SerienstreamPlugin(BasisPlugin):
info.update(parsed_info) info.update(parsed_info)
if parsed_art: if parsed_art:
art.update(parsed_art) art.update(parsed_art)
loaded_full = True
except Exception: except Exception:
pass pass
self._series_metadata_cache[cache_key] = (dict(info), dict(art)) self._series_metadata_cache[cache_key] = (dict(info), dict(art))
if loaded_full:
self._series_metadata_full.add(cache_key)
return info, art, None return info, art, None
def series_url_for_title(self, title: str) -> str: def series_url_for_title(self, title: str) -> str:
@@ -1875,8 +1742,6 @@ class SerienstreamPlugin(BasisPlugin):
             self._season_links_cache.clear()
             self._episode_label_cache.clear()
             self._catalog_cache = None
-            self._series_metadata_cache.clear()
-            self._series_metadata_full.clear()
             return []
         if not self._requests_available:
             raise RuntimeError("SerienstreamPlugin kann ohne requests/bs4 nicht suchen.")
@@ -1890,8 +1755,6 @@ class SerienstreamPlugin(BasisPlugin):
             self._season_cache.clear()
             self._episode_label_cache.clear()
             self._catalog_cache = None
-            self._series_metadata_cache.clear()
-            self._series_metadata_full.clear()
             raise RuntimeError(f"Serienstream-Suche fehlgeschlagen: {exc}") from exc
         self._series_results = {}
         for result in results:
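For context, the `_series_metadata_full` bookkeeping this diff removes is a small two-tier cache pattern: partial list-page metadata is cached, but a key only counts as a cache hit once its detail page was parsed completely. A standalone sketch of that pattern (class and method names here are illustrative, not the plugin's actual API):

```python
class MetadataCache:
    """Cache partial metadata, but only skip refetching when a key was fully loaded."""

    def __init__(self):
        self._cache = {}   # key -> (info dict, art dict)
        self._full = set() # keys whose detail page was parsed completely

    def get_if_full(self, key):
        # A hit requires both a cached entry and the "fully loaded" marker.
        if key in self._cache and key in self._full:
            info, art = self._cache[key]
            return dict(info), dict(art)
        return None

    def store(self, key, info, art, *, full):
        self._cache[key] = (dict(info), dict(art))
        if full:
            self._full.add(key)

    def clear(self):
        self._cache.clear()
        self._full.clear()

cache = MetadataCache()
cache.store("serie-a", {"title": "A"}, {}, full=False)  # list-page stub only
cache.store("serie-b", {"title": "B", "plot": "..."}, {"poster": "b.jpg"}, full=True)
```

Dropping the `_full` set, as the new side does, means a stub entry from a list page can satisfy later lookups, which is why the new code also stops seeding `info`/`art` from the cache.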


@@ -66,9 +66,12 @@ SETTING_LOG_URLS = "log_urls_topstreamfilm"
 SETTING_DUMP_HTML = "dump_html_topstreamfilm"
 SETTING_SHOW_URL_INFO = "show_url_info_topstreamfilm"
 SETTING_LOG_ERRORS = "log_errors_topstreamfilm"
+SETTING_GENRE_MAX_PAGES = "topstream_genre_max_pages"
 DEFAULT_TIMEOUT = 20
 DEFAULT_PREFERRED_HOSTERS = ["supervideo", "dropload", "voe"]
 MEINECLOUD_HOST = "meinecloud.click"
+DEFAULT_GENRE_MAX_PAGES = 20
+HARD_MAX_GENRE_PAGES = 200
 HEADERS = {
     "User-Agent": "Mozilla/5.0 (Kodi; ViewIt) AppleWebKit/537.36 (KHTML, like Gecko)",
     "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
@@ -344,6 +347,22 @@ class TopstreamfilmPlugin(BasisPlugin):
             return urljoin(base if base.endswith("/") else base + "/", href)
         return href

+    def _get_setting_bool(self, setting_id: str, *, default: bool = False) -> bool:
+        return get_setting_bool(ADDON_ID, setting_id, default=default)
+
+    def _get_setting_int(self, setting_id: str, *, default: int) -> int:
+        if xbmcaddon is None:
+            return default
+        try:
+            addon = xbmcaddon.Addon(ADDON_ID)
+            getter = getattr(addon, "getSettingInt", None)
+            if callable(getter):
+                return int(getter(setting_id))
+            raw = str(addon.getSetting(setting_id) or "").strip()
+            return int(raw) if raw else default
+        except Exception:
+            return default
+
     def _notify_url(self, url: str) -> None:
         notify_url(
             ADDON_ID,
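`_get_setting_int` returns whatever integer the user typed; this hunk does not show where `topstream_genre_max_pages` is clamped against `HARD_MAX_GENRE_PAGES`, so the following is only a hedged sketch of how such a clamp could look (the function name and the raw-string parameter are made up for illustration):

```python
DEFAULT_GENRE_MAX_PAGES = 20  # constants mirror the diff above
HARD_MAX_GENRE_PAGES = 200

def genre_max_pages(raw_value):
    """Parse a user-supplied page limit and clamp it to a sane range."""
    try:
        value = int(str(raw_value).strip())
    except (TypeError, ValueError):
        return DEFAULT_GENRE_MAX_PAGES
    if value < 1:
        # Zero or negative limits would disable paging entirely; fall back.
        return DEFAULT_GENRE_MAX_PAGES
    return min(value, HARD_MAX_GENRE_PAGES)
```

For example, `genre_max_pages("9999")` comes back capped at 200, while an empty or non-numeric setting falls back to the default of 20.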


@@ -8,16 +8,8 @@ from __future__ import annotations
 from typing import Optional

-_LAST_RESOLVE_ERROR = ""
-
-
-def get_last_error() -> str:
-    return str(_LAST_RESOLVE_ERROR or "")
-
-
 def resolve(url: str) -> Optional[str]:
-    global _LAST_RESOLVE_ERROR
-    _LAST_RESOLVE_ERROR = ""
     if not url:
         return None
     try:
@@ -31,14 +23,12 @@ def resolve(url: str) -> Optional[str]:
         hmf = hosted(url)
         valid = getattr(hmf, "valid_url", None)
         if callable(valid) and not valid():
-            _LAST_RESOLVE_ERROR = "invalid url"
             return None
         resolver = getattr(hmf, "resolve", None)
         if callable(resolver):
             result = resolver()
             return str(result) if result else None
-    except Exception as exc:
-        _LAST_RESOLVE_ERROR = str(exc or "")
+    except Exception:
         pass
try: try:
@@ -46,8 +36,8 @@ def resolve(url: str) -> Optional[str]:
         if callable(resolve_fn):
             result = resolve_fn(url)
             return str(result) if result else None
-    except Exception as exc:
-        _LAST_RESOLVE_ERROR = str(exc or "")
+    except Exception:
         return None
     return None
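The simplified `resolve()` keeps its two-stage shape: try ResolveURL's `HostedMediaFile`, then fall back to a module-level resolve function, swallowing errors in between. That pattern generalizes to a small helper; the stub resolvers below are placeholders for illustration, not real hoster logic:

```python
from typing import Optional

def resolve_with_fallback(url: str, resolvers) -> Optional[str]:
    """Return the first non-empty result; a failing resolver must not break the chain."""
    if not url:
        return None
    for resolver in resolvers:
        try:
            result = resolver(url)
            if result:
                return str(result)
        except Exception:
            continue  # mirror the plugin: swallow and try the next resolver
    return None

# Stub resolvers standing in for the ResolveURL entry points.
def primary(url):
    raise RuntimeError("hoster not supported")

def secondary(url):
    return url.replace("/e/", "/stream/")
```

One trade-off of this diff: without `_LAST_RESOLVE_ERROR` the caller can no longer distinguish "no hoster matched" from "resolver crashed"; both now surface as `None`.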


@@ -1,66 +1,6 @@
 <?xml version="1.0" encoding="UTF-8"?>
 <settings>
-    <category label="Quellen">
-        <setting id="serienstream_base_url" type="text" label="SerienStream Basis-URL" default="https://s.to" />
-        <setting id="aniworld_base_url" type="text" label="AniWorld Basis-URL" default="https://aniworld.to" />
-        <setting id="topstream_base_url" type="text" label="TopStream Basis-URL" default="https://topstreamfilm.live" />
-        <setting id="einschalten_base_url" type="text" label="Einschalten Basis-URL" default="https://einschalten.in" />
-        <setting id="filmpalast_base_url" type="text" label="Filmpalast Basis-URL" default="https://filmpalast.to" />
-        <setting id="doku_streams_base_url" type="text" label="Doku-Streams Basis-URL" default="https://doku-streams.com" />
-    </category>
-    <category label="Metadaten">
-        <setting id="serienstream_metadata_source" type="enum" label="SerienStream Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
-        <setting id="aniworld_metadata_source" type="enum" label="AniWorld Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
-        <setting id="topstreamfilm_metadata_source" type="enum" label="TopStream Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
-        <setting id="einschalten_metadata_source" type="enum" label="Einschalten Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
-        <setting id="filmpalast_metadata_source" type="enum" label="Filmpalast Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
-        <setting id="doku_streams_metadata_source" type="enum" label="Doku-Streams Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
-        <setting id="tmdb_enabled" type="bool" label="TMDB aktivieren" default="true" />
-        <setting id="tmdb_language" type="text" label="TMDB Sprache (z. B. de-DE)" default="de-DE" />
-        <setting id="tmdb_show_plot" type="bool" label="TMDB Beschreibung anzeigen" default="true" />
-        <setting id="tmdb_show_art" type="bool" label="TMDB Poster und Vorschaubild anzeigen" default="true" />
-        <setting id="tmdb_show_fanart" type="bool" label="TMDB Fanart/Backdrop anzeigen" default="true" />
-        <setting id="tmdb_show_rating" type="bool" label="TMDB Bewertung anzeigen" default="true" />
-        <setting id="tmdb_show_votes" type="bool" label="TMDB Stimmen anzeigen" default="false" />
-    </category>
-    <category label="TMDB Erweitert">
-        <setting id="tmdb_api_key" type="text" label="TMDB API Key" default="" />
-        <setting id="tmdb_prefetch_concurrency" type="number" label="TMDB: gleichzeitige Anfragen (1-20)" default="6" />
-        <setting id="tmdb_show_cast" type="bool" label="TMDB Besetzung anzeigen" default="false" />
-        <setting id="tmdb_show_episode_cast" type="bool" label="TMDB Besetzung pro Episode anzeigen" default="false" />
-        <setting id="tmdb_genre_metadata" type="bool" label="TMDB Daten in Genre-Listen anzeigen" default="false" />
-        <setting id="tmdb_log_requests" type="bool" label="TMDB API-Anfragen loggen" default="false" />
-        <setting id="tmdb_log_responses" type="bool" label="TMDB API-Antworten loggen" default="false" />
-    </category>
-    <category label="Updates">
-        <setting id="update_channel" type="enum" label="Update-Kanal" default="1" values="Main|Nightly|Custom" />
-        <setting id="apply_update_channel" type="action" label="Update-Kanal jetzt anwenden" action="RunPlugin(plugin://plugin.video.viewit/?action=apply_update_channel)" option="close" />
-        <setting id="auto_update_enabled" type="bool" label="Automatische Updates (beim Start pruefen)" default="false" />
-        <setting id="select_update_version" type="action" label="Version waehlen und installieren" action="RunPlugin(plugin://plugin.video.viewit/?action=select_update_version)" option="close" />
-        <setting id="update_installed_version" type="text" label="Installierte Version" default="-" enable="false" />
-        <setting id="update_available_selected" type="text" label="Verfuegbar (gewaehlter Kanal)" default="-" enable="false" />
-        <setting id="update_available_main" type="text" label="Verfuegbar Main" default="-" enable="false" />
-        <setting id="update_available_nightly" type="text" label="Verfuegbar Nightly" default="-" enable="false" />
-        <setting id="update_active_channel" type="text" label="Aktiver Kanal" default="-" enable="false" />
-        <setting id="update_active_repo_url" type="text" label="Aktive Repo URL" default="-" enable="false" />
-        <setting id="update_info" type="text" label="Updates laufen ueber den normalen Kodi-Update-Mechanismus." default="" enable="false" />
-        <setting id="update_repo_url_main" type="text" label="Main URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/main/addons.xml" />
-        <setting id="update_repo_url_nightly" type="text" label="Nightly URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/nightly/addons.xml" />
-        <setting id="update_repo_url" type="text" label="Custom URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/nightly/addons.xml" />
-        <setting id="auto_update_last_ts" type="text" label="Auto-Update letzte Pruefung (intern)" default="0" visible="false" />
-        <setting id="update_version_addon" type="text" label="ViewIT Version" default="-" visible="false" />
-        <setting id="update_version_serienstream" type="text" label="SerienStream Version" default="-" visible="false" />
-        <setting id="update_version_aniworld" type="text" label="AniWorld Version" default="-" visible="false" />
-        <setting id="update_version_einschalten" type="text" label="Einschalten Version" default="-" visible="false" />
-        <setting id="update_version_topstreamfilm" type="text" label="TopStream Version" default="-" visible="false" />
-        <setting id="update_version_filmpalast" type="text" label="Filmpalast Version" default="-" visible="false" />
-        <setting id="update_version_doku_streams" type="text" label="Doku-Streams Version" default="-" visible="false" />
-    </category>
-    <category label="Debug Global">
+    <category label="Debug und Logs">
         <setting id="debug_log_urls" type="bool" label="URLs mitschreiben (global)" default="false" />
         <setting id="debug_dump_html" type="bool" label="HTML speichern (global)" default="false" />
         <setting id="debug_show_url_info" type="bool" label="Aktuelle URL anzeigen (global)" default="false" />
@@ -68,32 +8,81 @@
         <setting id="log_max_mb" type="number" label="URL-Log: maximale Dateigroesse (MB)" default="5" />
         <setting id="log_max_files" type="number" label="URL-Log: Anzahl alter Dateien" default="3" />
         <setting id="dump_max_files" type="number" label="HTML: maximale Dateien pro Plugin" default="200" />
-    </category>
-    <category label="Debug Quellen">
-        <setting id="log_urls_serienstream" type="bool" label="SerienStream: URLs mitschreiben" default="false" />
-        <setting id="dump_html_serienstream" type="bool" label="SerienStream: HTML speichern" default="false" />
-        <setting id="show_url_info_serienstream" type="bool" label="SerienStream: Aktuelle URL anzeigen" default="false" />
-        <setting id="log_errors_serienstream" type="bool" label="SerienStream: Fehler mitschreiben" default="false" />
-        <setting id="log_urls_aniworld" type="bool" label="AniWorld: URLs mitschreiben" default="false" />
-        <setting id="dump_html_aniworld" type="bool" label="AniWorld: HTML speichern" default="false" />
-        <setting id="show_url_info_aniworld" type="bool" label="AniWorld: Aktuelle URL anzeigen" default="false" />
-        <setting id="log_errors_aniworld" type="bool" label="AniWorld: Fehler mitschreiben" default="false" />
-        <setting id="log_urls_topstreamfilm" type="bool" label="TopStream: URLs mitschreiben" default="false" />
-        <setting id="dump_html_topstreamfilm" type="bool" label="TopStream: HTML speichern" default="false" />
-        <setting id="show_url_info_topstreamfilm" type="bool" label="TopStream: Aktuelle URL anzeigen" default="false" />
-        <setting id="log_errors_topstreamfilm" type="bool" label="TopStream: Fehler mitschreiben" default="false" />
+        <setting id="log_urls_serienstream" type="bool" label="Serienstream: URLs mitschreiben" default="false" />
+        <setting id="dump_html_serienstream" type="bool" label="Serienstream: HTML speichern" default="false" />
+        <setting id="show_url_info_serienstream" type="bool" label="Serienstream: Aktuelle URL anzeigen" default="false" />
+        <setting id="log_errors_serienstream" type="bool" label="Serienstream: Fehler mitschreiben" default="false" />
+        <setting id="log_urls_aniworld" type="bool" label="Aniworld: URLs mitschreiben" default="false" />
+        <setting id="dump_html_aniworld" type="bool" label="Aniworld: HTML speichern" default="false" />
+        <setting id="show_url_info_aniworld" type="bool" label="Aniworld: Aktuelle URL anzeigen" default="false" />
+        <setting id="log_errors_aniworld" type="bool" label="Aniworld: Fehler mitschreiben" default="false" />
+        <setting id="log_urls_topstreamfilm" type="bool" label="Topstreamfilm: URLs mitschreiben" default="false" />
+        <setting id="dump_html_topstreamfilm" type="bool" label="Topstreamfilm: HTML speichern" default="false" />
+        <setting id="show_url_info_topstreamfilm" type="bool" label="Topstreamfilm: Aktuelle URL anzeigen" default="false" />
+        <setting id="log_errors_topstreamfilm" type="bool" label="Topstreamfilm: Fehler mitschreiben" default="false" />
         <setting id="log_urls_einschalten" type="bool" label="Einschalten: URLs mitschreiben" default="false" />
         <setting id="dump_html_einschalten" type="bool" label="Einschalten: HTML speichern" default="false" />
         <setting id="show_url_info_einschalten" type="bool" label="Einschalten: Aktuelle URL anzeigen" default="false" />
         <setting id="log_errors_einschalten" type="bool" label="Einschalten: Fehler mitschreiben" default="false" />
         <setting id="log_urls_filmpalast" type="bool" label="Filmpalast: URLs mitschreiben" default="false" />
         <setting id="dump_html_filmpalast" type="bool" label="Filmpalast: HTML speichern" default="false" />
         <setting id="show_url_info_filmpalast" type="bool" label="Filmpalast: Aktuelle URL anzeigen" default="false" />
         <setting id="log_errors_filmpalast" type="bool" label="Filmpalast: Fehler mitschreiben" default="false" />
     </category>
+    <category label="TopStream">
+        <setting id="topstream_base_url" type="text" label="Basis-URL" default="https://topstreamfilm.live" />
+        <setting id="topstreamfilm_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
+        <setting id="topstream_genre_max_pages" type="number" label="Genres: max. Seiten laden" default="20" />
+    </category>
+    <category label="SerienStream">
+        <setting id="serienstream_base_url" type="text" label="Basis-URL" default="https://s.to" />
+        <setting id="serienstream_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
+    </category>
+    <category label="AniWorld">
+        <setting id="aniworld_base_url" type="text" label="Basis-URL" default="https://aniworld.to" />
+        <setting id="aniworld_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
+    </category>
+    <category label="Einschalten">
+        <setting id="einschalten_base_url" type="text" label="Basis-URL" default="https://einschalten.in" />
+        <setting id="einschalten_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
+    </category>
+    <category label="Filmpalast">
+        <setting id="filmpalast_base_url" type="text" label="Basis-URL" default="https://filmpalast.to" />
+        <setting id="filmpalast_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
+    </category>
+    <category label="Doku-Streams">
+        <setting id="doku_streams_base_url" type="text" label="Basis-URL" default="https://doku-streams.com" />
+        <setting id="doku_streams_metadata_source" type="enum" label="Metadatenquelle" default="0" values="Automatisch|Quelle|TMDB|Mischen" />
+    </category>
+    <category label="TMDB">
+        <setting id="tmdb_enabled" type="bool" label="TMDB aktivieren" default="true" />
+        <setting id="tmdb_api_key" type="text" label="TMDB API Key" default="" />
+        <setting id="tmdb_language" type="text" label="TMDB Sprache (z. B. de-DE)" default="de-DE" />
+        <setting id="tmdb_prefetch_concurrency" type="number" label="TMDB: gleichzeitige Anfragen (1-20)" default="6" />
+        <setting id="tmdb_show_plot" type="bool" label="TMDB Beschreibung anzeigen" default="true" />
+        <setting id="tmdb_show_art" type="bool" label="TMDB Poster und Vorschaubild anzeigen" default="true" />
+        <setting id="tmdb_show_fanart" type="bool" label="TMDB Fanart/Backdrop anzeigen" default="true" />
+        <setting id="tmdb_show_rating" type="bool" label="TMDB Bewertung anzeigen" default="true" />
+        <setting id="tmdb_show_votes" type="bool" label="TMDB Stimmen anzeigen" default="false" />
+        <setting id="tmdb_show_cast" type="bool" label="TMDB Besetzung anzeigen" default="false" />
+        <setting id="tmdb_show_episode_cast" type="bool" label="TMDB Besetzung pro Episode anzeigen" default="false" />
+        <setting id="tmdb_genre_metadata" type="bool" label="TMDB Daten in Genre-Listen anzeigen" default="false" />
+        <setting id="tmdb_log_requests" type="bool" label="TMDB API-Anfragen loggen" default="false" />
+        <setting id="tmdb_log_responses" type="bool" label="TMDB API-Antworten loggen" default="false" />
+    </category>
+    <category label="Update">
+        <setting id="update_channel" type="enum" label="Update-Kanal" default="0" values="Main|Nightly|Custom" />
+        <setting id="update_repo_url_main" type="text" label="Main URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/main/addons.xml" />
+        <setting id="update_repo_url_nightly" type="text" label="Nightly URL (addons.xml)" default="https://gitea.it-drui.de/viewit/ViewIT-Kodi-Repo/raw/branch/nightly/addons.xml" />
+        <setting id="update_repo_url" type="text" label="Custom URL (addons.xml)" default="http://127.0.0.1:8080/repo/addons.xml" />
+        <setting id="run_update_check" type="action" label="Jetzt nach Updates suchen" action="RunPlugin(plugin://plugin.video.viewit/?action=check_updates)" option="close" />
+        <setting id="update_info" type="text" label="Updates laufen ueber den normalen Kodi-Update-Mechanismus." default="" enable="false" />
+        <setting id="update_version_addon" type="text" label="ViewIT Version" default="-" enable="false" />
+        <setting id="update_version_serienstream" type="text" label="Serienstream Version" default="-" enable="false" />
+        <setting id="update_version_aniworld" type="text" label="Aniworld Version" default="-" enable="false" />
+        <setting id="update_version_einschalten" type="text" label="Einschalten Version" default="-" enable="false" />
+        <setting id="update_version_topstreamfilm" type="text" label="Topstreamfilm Version" default="-" enable="false" />
+        <setting id="update_version_filmpalast" type="text" label="Filmpalast Version" default="-" enable="false" />
+        <setting id="update_version_doku_streams" type="text" label="Doku-Streams Version" default="-" enable="false" />
+    </category>
 </settings>


@@ -1,49 +0,0 @@
# Release Flow (Main + Nightly + Dev)
This project uses three release channels:
- `dev`: playground for experiments
- `nightly`: integration and test channel
- `main`: stable channel
## Rules
- Experimental work goes to `dev`.
- Feature work for release goes to `nightly`.
- Promote from `nightly` to `main` with `--squash` only.
- `main` version has no suffix (`0.1.60`).
- `nightly` version uses `-nightly` and is always at least one patch higher than `main` (`0.1.61-nightly`).
- `dev` version uses `-dev` (`0.1.62-dev`).
- Keep changelogs split:
- `CHANGELOG-DEV.md`
- `CHANGELOG-NIGHTLY.md`
- `CHANGELOG.md`
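The suffix rules above translate directly into the per-channel version filter that the nightly changelog describes for the update dialog. A minimal sketch (the regexes assume plain `X.Y.Z` version cores, as in this project):

```python
import re

# One pattern per release channel; suffixes follow the rules above.
CHANNEL_PATTERNS = {
    "main": re.compile(r"^\d+\.\d+\.\d+$"),
    "nightly": re.compile(r"^\d+\.\d+\.\d+-nightly$"),
    "dev": re.compile(r"^\d+\.\d+\.\d+-dev$"),
}

def versions_for_channel(versions, channel):
    """Keep only the version strings that belong to the given channel."""
    pattern = CHANNEL_PATTERNS[channel]
    return [v for v in versions if pattern.match(v)]

available = ["0.1.60", "0.1.61-nightly", "0.1.62-dev", "0.1.62-nightly"]
```

With this list, the main channel sees only `0.1.60`, while the nightly channel sees the two `-nightly` entries and never the `-dev` build.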
## Nightly publish
1) Finish changes on `nightly`.
2) Bump addon version in `addon/addon.xml` to `X.Y.Z-nightly`.
3) Build and publish nightly repo artifacts.
4) Push `nightly`.
## Promote nightly to main
```bash
git checkout main
git pull origin main
git merge --squash nightly
git commit -m "release: X.Y.Z"
```
Then:
1) Set `addon/addon.xml` version to `X.Y.Z` (without `-nightly`).
2) Build and publish main repo artifacts.
3) Push `main`.
4) Optional tag: `vX.Y.Z`.
## Local ZIPs (separated)
- Dev ZIP output: `dist/local_zips/dev/`
- Main ZIP output: `dist/local_zips/main/`
- Nightly ZIP output: `dist/local_zips/nightly/`