1. It cannot retrieve codec information on `Firefox` by
`getSenders/getReceivers`
2. It can retrieve codec information on `Chrome` by `getReceivers`, but
incorrect, like this:

3. So, we retrieve codec information from `getStats`, and it works well.
4. The timer is used because sometimes the codec cannot be retrieved
when `iceGatheringState` is `complete`.
5. Testing has been completed on the browsers listed below.
- [x] Chrome
- [x] Edge
- [x] Safari
- [x] Firefox
---------
Co-authored-by: winlin <winlinvip@gmail.com>
1. When the chunk message header employs type 1 and type 2, the extended
timestamp denotes the time delta.
2. When the DTS (Decoding Time Stamp) experiences a jump and exceeds
16777215, there can be errors in DTS calculation, and if the audio and
video delta differs, it may result in audio-video synchronization
issues.
---------
`TRANS_BY_GPT4`
---------
Co-authored-by: 彭治湘 <zuolengchan@douyu.tv>
Co-authored-by: Haibo Chen(陈海博) <495810242@qq.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: winlin <winlinvip@gmail.com>
In the scenario of converting WebRTC to RTMP, this conversion will not
proceed until an SenderReport is received; for reference, see:
https://github.com/ossrs/srs/pull/2470.
Thus, if HTTP-FLV streaming is attempted before the SR is received, the
FLV Header will contain only audio, devoid of video content.
This error can be resolved by disabling `guess_has_av` in the
configuration file, since we can guarantee that both audio and video are
present in the test cases.
However, in the original regression tests, the
`TestRtcPublish_HttpFlvPlay` test case contains a bug:
5a404c089b/trunk/3rdparty/srs-bench/srs/rtc_test.go (L2421-L2424)
The test would pass when `hasAudio` is true and `hasVideo` is false,
which is actually incorrect. Therefore, it has been revised so that the
test now only passes if both values are true.
---------
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Co-authored-by: winlin <winlinvip@gmail.com>
To enable H.265 support for the WebRTC protocol, upgrade the pion/webrtc
library to version 4.
---------
Co-authored-by: john <hondaxiao@tencent.com>
Co-authored-by: winlin <winlinvip@gmail.com>
SrsUniquePtr does not support array or object created by malloc, because
we only use delete to dispose the resource. You can use a custom
function to free the memory allocated by malloc or other allocators.
```cpp
char* p = (char*)malloc(1024);
SrsUniquePtr<char> ptr(p, your_free_chars);
```
This is used to replace the SrsAutoFreeH. For example:
```cpp
addrinfo* r = NULL;
SrsAutoFreeH(addrinfo, r, freeaddrinfo);
getaddrinfo("127.0.0.1", NULL, &hints, &r);
```
Now, this can be replaced by:
```cpp
addrinfo* r = NULL;
getaddrinfo("127.0.0.1", NULL, &hints, &r);
SrsUniquePtr<addrinfo> r2(r, freeaddrinfo);
```
Please aware that there is a slight difference between SrsAutoFreeH and
SrsUniquePtr. SrsAutoFreeH will track the address of pointer, while
SrsUniquePtr will not.
```cpp
addrinfo* r = NULL;
SrsAutoFreeH(addrinfo, r, freeaddrinfo); // r will be freed even r is changed later.
SrsUniquePtr<addrinfo> ptr(r, freeaddrinfo); // crash because r is an invalid pointer.
```
---------
Co-authored-by: Haibo Chen <495810242@qq.com>
Co-authored-by: john <hondaxiao@tencent.com>
```
../../../src/utest/srs_utest_st.cpp:27: Failure
Expected: (st_time_2 - st_time_1) <= (100), actual: 119 vs 100
[ FAILED ] StTest.StUtimeInMicroseconds (0 ms)
```
Maybe github's vm, running the action jobs, is slower. I notice this
error happens frequently, so let the UT pass by increase the number.
---------
Co-authored-by: Haibo Chen <495810242@qq.com>
Co-authored-by: winlin <winlinvip@gmail.com>
## How to reproduce?
1. Refer this commit, which contains the web demo to capture screen as
video stream through RTC.
2. Copy the `trunk/research/players/whip.html` and
`trunk/research/players/js/srs.sdk.js` to replace the `develop` branch
source code.
3. `./configure && make`
4. `./objs/srs -c conf/rtc2rtmp.conf`
5. open `http://localhost:8080/players/whip.html?schema=http`
6. check `Screen` radio option.
7. click `publish`, then check the screen to share.
8. play the rtmp live stream: `rtmp://localhost/live/livestream`
9. check the video stuttering.
## Cause
When capture screen by the chrome web browser, which send RTP packet
with empty payload frequently, then all the cached RTP packets are
dropped before next key frame arrive in this case.
The OBS screen stream and camera stream do not have such problem.
## Add screen stream to WHIP demo
><img width="581" alt="Screenshot 2024-08-28 at 2 49 46 PM"
src="https://github.com/user-attachments/assets/9557dbd2-c799-4dfd-b336-5bbf2e4f8fb8">
---------
Co-authored-by: winlin <winlinvip@gmail.com>
try to fix#3978
**Background**
check #3978
**Research**
I referred the Android platform's solution, because I have android
background, and there is a loop to handle message inside android.
ff007a03c0/core/java/android/os/Handler.java (L701-L706C6)
```
public final boolean sendMessageDelayed(@NonNull Message msg, long delayMillis) {
if (delayMillis < 0) {
delayMillis = 0;
}
return sendMessageAtTime(msg, SystemClock.uptimeMillis() + delayMillis);
}
```
59d9dc1f50/libutils/SystemClock.cpp (L37-L51)
```
/*
* native public static long uptimeMillis();
*/
int64_t uptimeMillis()
{
return nanoseconds_to_milliseconds(uptimeNanos());
}
/*
* public static native long uptimeNanos();
*/
int64_t uptimeNanos()
{
return systemTime(SYSTEM_TIME_MONOTONIC);
}
```
59d9dc1f50/libutils/Timers.cpp (L32-L55)
```
#if defined(__linux__)
nsecs_t systemTime(int clock) {
checkClockId(clock);
static constexpr clockid_t clocks[] = {CLOCK_REALTIME, CLOCK_MONOTONIC,
CLOCK_PROCESS_CPUTIME_ID, CLOCK_THREAD_CPUTIME_ID,
CLOCK_BOOTTIME};
static_assert(clock_id_max == arraysize(clocks));
timespec t = {};
clock_gettime(clocks[clock], &t);
return nsecs_t(t.tv_sec)*1000000000LL + t.tv_nsec;
}
#else
nsecs_t systemTime(int clock) {
// TODO: is this ever called with anything but REALTIME on mac/windows?
checkClockId(clock);
// Clock support varies widely across hosts. Mac OS doesn't support
// CLOCK_BOOTTIME (and doesn't even have clock_gettime until 10.12).
// Windows is windows.
timeval t = {};
gettimeofday(&t, nullptr);
return nsecs_t(t.tv_sec)*1000000000LL + nsecs_t(t.tv_usec)*1000LL;
}
#endif
```
For Linux system, we can use `clock_gettime` api, but it's first
appeared in Mac OSX 10.12.
`man clock_gettime`
The requirement is to find an alternative way to get the timestamp in
microsecond unit, but the `clock_gettime` get nanoseconds, the math
formula is the nanoseconds / 1000 = microsecond. Then I check the
performance of this api + math division.
I used those code to check the `clock_gettime` performance.
```
#include <sys/time.h>
#include <time.h>
#include <stdio.h>
#include <unistd.h>
int main() {
struct timeval tv;
struct timespec ts;
clock_t start;
clock_t end;
long t;
while (1) {
start = clock();
gettimeofday(&tv, NULL);
end = clock();
printf("gettimeofday clock is %lu\n", end - start);
printf("gettimeofday is %lld\n", (tv.tv_sec * 1000000LL + tv.tv_usec));
start = clock();
clock_gettime(CLOCK_MONOTONIC, &ts);
t = ts.tv_sec * 1000000L + ts.tv_nsec / 1000L;
end = clock();
printf("clock_monotonic clock is %lu\n", end - start);
printf("clock_monotonic: seconds is %ld, nanoseconds is %ld, sum is %ld\n", ts.tv_sec, ts.tv_nsec, t);
start = clock();
clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
t = ts.tv_sec * 1000000L + ts.tv_nsec / 1000L;
end = clock();
printf("clock_monotonic_raw clock is %lu\n", end - start);
printf("clock_monotonic_raw: nanoseconds is %ld, sum is %ld\n", ts.tv_nsec, t);
sleep(3);
}
return 0;
}
```
Here is output:
env: Mac OS M2 chip.
```
gettimeofday clock is 11
gettimeofday is 1709775727153949
clock_monotonic clock is 2
clock_monotonic: seconds is 1525204, nanoseconds is 409453000, sum is 1525204409453
clock_monotonic_raw clock is 2
clock_monotonic_raw: nanoseconds is 770493000, sum is 1525222770493
```
We can see the `clock_gettime` is faster than `gettimeofday`, so there
are no performance risks.
**MacOS solution**
`clock_gettime` api only available until mac os 10.12, for the mac os
older than 10.12, just keep the `gettimeofday`.
check osx version in `auto/options.sh`, then add MACRO in
`auto/depends.sh`, the MACRO is `MD_OSX_HAS_NO_CLOCK_GETTIME`.
**CYGWIN**
According to google search, it seems the
`clock_gettime(CLOCK_MONOTONIC)` is not support well at least 10 years
ago, but I didn't own an windows machine, so can't verify it. so keep
win's solution.
---------
Co-authored-by: winlin <winlinvip@gmail.com>
Please note that the proxy server is a new architecture or the next
version of the Origin Cluster, which allows the publication of multiple
streams. The SRS origin cluster consists of a group of origin servers
designed to handle a large number of streams.
```text
+-----------------------+
+---+ SRS Proxy(Deployment) +------+---------------------+
+-----------------+ | +-----------+-----------+ + +
| LB(K8s Service) +--+ +(Redis/MESH) + SRS Origin Servers +
+-----------------+ | +-----------+-----------+ + (Deployment) +
+---+ SRS Proxy(Deployment) +------+---------------------+
+-----------------------+
```
The new origin cluster is designed as a collection of proxy servers. For
more information, see [Discussion
#3634](https://github.com/ossrs/srs/discussions/3634). If you prefer to
use the old origin cluster, please switch to a version before SRS 6.0.
A proxy server can be used for a set of origin servers, which are
isolated and dedicated origin servers. The main improvement in the new
architecture is to store the state for origin servers in the proxy
server, rather than using MESH to communicate between origin servers.
With a proxy server, you can deploy origin servers as stateless servers,
such as in a Kubernetes (K8s) deployment.
Now that the proxy server is a stateful server, it uses Redis to store
the states. For faster development, we use Go to develop the proxy
server, instead of C/C++. Therefore, the proxy server itself is also
stateless, with all states stored in the Redis server or cluster. This
makes the new origin cluster architecture very powerful and robust.
The proxy server is also an architecture designed to solve multiple
process bottlenecks. You can run hundreds of SRS origin servers with one
proxy server on the same machine. This solution can utilize multi-core
machines, such as servers with 128 CPUs. Thus, we can keep SRS
single-threaded and very simple. See
https://github.com/ossrs/srs/discussions/3665#discussioncomment-6474441
for details.
```text
+--------------------+
+-------+ SRS Origin Server +
+ +--------------------+
+
+-----------------------+ + +--------------------+
+ SRS Proxy(Deployment) +------+-------+ SRS Origin Server +
+-----------------------+ + +--------------------+
+
+ +--------------------+
+-------+ SRS Origin Server +
+--------------------+
```
Keep in mind that the proxy server for the Origin Cluster is designed to
handle many streams. To address the issue of many viewers, we will
enhance the Edge Cluster to support more protocols.
```text
+------------------+ +--------------------+
+ SRS Edge Server +--+ +-------+ SRS Origin Server +
+------------------+ + + +--------------------+
+ +
+------------------+ + +-----------------------+ + +--------------------+
+ SRS Edge Server +--+-----+ SRS Proxy(Deployment) +------+-------+ SRS Origin Server +
+------------------+ + +-----------------------+ + +--------------------+
+ +
+------------------+ + + +--------------------+
+ SRS Edge Server +--+ +-------+ SRS Origin Server +
+------------------+ +--------------------+
```
With the new Origin Cluster and Edge Cluster, you have a media system
capable of supporting a large number of streams and viewers. For
example, you can publish 10,000 streams, each with 100,000 viewers.
---------
Co-authored-by: Jacob Su <suzp1984@gmail.com>
The heartbeat of SRS is a timer that requests an HTTP URL. We can use
this heartbeat to report the necessary information for registering the
backend server with the proxy server.
```text
SRS(backend) --heartbeat---> Proxy server
```
A proxy server is a specialized load balancer for media servers. It
operates at the application level rather than the TCP level. For more
information about the proxy server, see issue #4158.
Note that we will merge this PR into SRS 5.0+, allowing the use of SRS
5.0+ as the backend server, not limited to SRS 7.0. However, the proxy
server is introduced in SRS 7.0.
It's also possible to implement a registration service, allowing you to
use other media servers as backend servers. For example, if you gather
information about an nginx-rtmp server and register it with the proxy
server, the proxy will forward RTMP streams to nginx-rtmp. The backend
server is not limited to SRS.
---------
Co-authored-by: Jacob Su <suzp1984@gmail.com>